Category Archives: A.I./Robotics

Georgian Technical University Brain-Inspired AI Inspires Insights About The Brain.


Context length preference across cortex. An index of context length preference is computed for each voxel in one subject and projected onto that subject’s cortical surface. (A voxel represents a value on a regular grid in three-dimensional space. As with pixels in a bitmap, voxels do not typically have their position explicitly encoded along with their values; instead, rendering systems infer a voxel’s position from its position relative to other voxels.) Voxels shown in blue are best modeled using short context while red voxels are best modeled with long context. Can artificial intelligence (AI) help us understand how the brain understands language? Can neuroscience help us understand why AI and neural networks are effective at predicting human perception? Research from X and Y of Georgian Technical University suggests both are possible. At Neural Information Processing Systems the scholars described the results of experiments that used artificial neural networks to predict, with greater accuracy than ever before, how different areas in the brain respond to specific words. “As words come into our heads, we form ideas of what someone is saying to us and we want to understand how that comes to us inside the brain” said X, assistant professor of Neuroscience and Computer Science at Georgian Technical University. “It seems like there should be systems to it but practically that’s just not how language works. Like anything in biology it’s very hard to reduce down to a simple set of equations”. 
The work employed a type of recurrent neural network called long short-term memory (LSTM) that includes in its calculations the relationship of each word to what came before, to better preserve context. “If a word has multiple meanings you infer the meaning of that word for that particular sentence depending on what was said earlier” said Y, a PhD student in X’s lab at Georgian Technical University. “Our hypothesis is that this would lead to better predictions of brain activity because the brain cares about context”. It sounds obvious, but for decades neuroscience experiments considered the response of the brain to individual words without a sense of their connection to chains of words or sentences. X describes the importance of doing “real-world neuroscience”. In their work the researchers ran experiments to test, and ultimately predict, how different areas in the brain would respond specifically when listening to stories. They used data collected from fMRI (functional magnetic resonance imaging) machines, which capture changes in the blood oxygenation level in the brain based on how active groups of neurons are. This serves as a proxy for where language concepts are “represented” in the brain. Using powerful supercomputers at the Georgian Technical University they trained a language model using the long short-term memory method so it could effectively predict what word would come next – a task akin to auto-complete, which the human mind is particularly adept at. “In trying to predict the next word this model has to implicitly learn all this other stuff about how language works” said X, “like which words tend to follow other words, without ever actually accessing the brain or any data about the brain”. 
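The idea that longer context sharpens next-word prediction can be illustrated with a toy model. The sketch below uses a simple n-gram counter rather than an LSTM, and the corpus, class name and API are invented for illustration only; it just shows why a two-word context can disambiguate a prediction that a one-word context cannot.

```python
from collections import Counter, defaultdict

class NgramModel:
    """Toy next-word predictor: counts which word follows each n-word context."""

    def __init__(self, n):
        self.n = n  # number of preceding words used as context
        self.counts = defaultdict(Counter)

    def train(self, words):
        for i in range(len(words) - self.n):
            context = tuple(words[i:i + self.n])
            self.counts[context][words[i + self.n]] += 1

    def predict(self, context):
        # Most likely next word given the last n words, or None if unseen.
        ctx = tuple(context[-self.n:])
        if ctx not in self.counts:
            return None
        return self.counts[ctx].most_common(1)[0][0]

corpus = "the dog barked at the cat and the dog chased the cat".split()
uni = NgramModel(1)   # one word of context
bi = NgramModel(2)    # two words of context
uni.train(corpus)
bi.train(corpus)

# With one word of context, "the" is ambiguous (dog or cat follow equally
# often); with two words, "chased the" pins the prediction down, mirroring
# the finding that more context improves next-word prediction.
print(uni.predict(["the"]))
print(bi.predict(["chased", "the"]))
```

A real LSTM learns soft, high-dimensional versions of these context statistics instead of discrete counts, which is what lets it generalize to unseen word sequences.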
Based on both the language model and the fMRI data, they trained a system that could predict how the brain would respond when it hears each word in a new story for the first time. Past efforts had shown that it is possible to localize language responses in the brain effectively. However the new research showed that adding the contextual element – in this case up to 20 preceding words – improved brain activity predictions significantly. They found that their predictions improved even when the least amount of context was used; the more context provided, the better the accuracy of their predictions. “Our analysis showed that if the long short-term memory incorporates more words then it gets better at predicting the next word” said Y, “which means that it must be including information from all the words in the past”. The research went further, exploring which parts of the brain were more sensitive to the amount of context included. They found, for instance, that concepts that seem to be localized to the auditory cortex were less dependent on context. “If you hear the word dog, this area doesn’t care what the 10 words were before that; it’s just going to respond to the sound of the word dog” X explained. On the other hand, brain areas that deal with higher-level thinking were easier to pinpoint when more context was included, which supports theories of the mind and language comprehension. “There was a really nice correspondence between the hierarchy of the artificial network and the hierarchy of the brain which we found interesting” X said. Natural language processing has taken great strides in recent years, but when it comes to answering questions, having natural conversations, or analyzing the sentiments in written texts, it still has a long way to go. The researchers believe their LSTM-based language model can help in these areas. 
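Predicting a brain response from word features is, at its core, an encoding model: a regularized regression from stimulus features to a voxel’s time course, scored by how well it predicts held-out data. The sketch below is a minimal stand-in with synthetic data; the real study used LSTM-derived features and measured BOLD responses, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_features = 200, 10

# Synthetic stimulus features (stand-ins for per-word LSTM embeddings)
# and a synthetic voxel response generated from them plus noise.
X = rng.standard_normal((n_timepoints, n_features))
true_w = rng.standard_normal(n_features)
y = X @ true_w + 0.1 * rng.standard_normal(n_timepoints)

def ridge_fit(X, y, alpha=1.0):
    # Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

w = ridge_fit(X[:150], y[:150])        # fit on "training" stories
pred = X[150:] @ w                      # predict a held-out story
r = np.corrcoef(pred, y[150:])[0, 1]    # prediction accuracy for this voxel
print(round(r, 3))
```

In the study, a correlation like `r` is computed per voxel, and comparing it across feature sets (short vs. long context) is what reveals which brain areas benefit from context.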
The LSTM (and neural networks in general) works by assigning values in high-dimensional space to individual components – here, words – so that each component can be defined by its thousands of disparate relationships to many other things. The researchers trained the language model by feeding it tens of millions of words drawn from online posts. Their system then made predictions for how thousands of voxels (three-dimensional pixels) in the brains of six subjects would respond to a second set of stories that neither the model nor the individuals had heard before. Because they were interested in the effects of context length and of individual layers in the neural network, they essentially tested 60 different factors for each subject: 20 lengths of context retention crossed with three different layers. All of this leads to computational problems of enormous scale, requiring massive amounts of computing power, memory, storage and data retrieval. Georgian Technical University’s resources were well suited to the problem. The researchers used the Georgian Technical University supercomputer, which contains both GPUs (graphics processing units) and CPUs (central processing units), for the computing tasks, along with a storage and data management resource to preserve and distribute the data. By parallelizing the problem across many processors they were able to run the computational experiment in weeks rather than years. “To develop these models effectively you need a lot of training data” X said. “That means you have to pass through your entire dataset every time you want to update the weights. 
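The caption at the top of this article describes a "context length preference" index per voxel. A plausible reading of that analysis, sketched below with made-up scores, is: build an encoding model per context length, then assign each voxel the context length whose model predicts it best (short-preferring voxels would be plotted blue, long-preferring ones red). The grid of scores here is random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
context_lengths = [1, 5, 10, 20]   # candidate numbers of preceding words
n_voxels = 6

# scores[i, v]: hypothetical prediction accuracy for voxel v when the
# encoding model uses context_lengths[i] words of context.
scores = rng.random((len(context_lengths), n_voxels))

# Context length preference index: the context length whose model best
# predicts each voxel's responses.
preference = [context_lengths[i] for i in scores.argmax(axis=0)]
print(preference)
```

In the actual study this comparison was run over 20 context lengths and three network layers for each of six subjects, which is why supercomputer-scale parallelism was needed.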
And that’s inherently very slow if you don’t use parallel resources like those at Georgian Technical University”. If it sounds complex, well – it is. This is leading X and Y to consider a more streamlined version of the system where, instead of developing a language prediction model and then applying it to the brain, they develop a model that directly predicts brain response. They call this an end-to-end system and it’s where X and Y hope to go in their future research. Such a model would improve its performance directly on brain responses: a wrong prediction of brain activity would feed back into the model and spur improvements. “If this works then it’s possible that this network could learn to read text or intake language similarly to how our brains do” X said. “Imagine a machine translation service, but one that understands what you’re saying instead of just learning a set of rules”. With such a system in place, X believes it is only a matter of time until a mind-reading system that can translate brain activity into language is feasible. In the meantime they are gaining insights into both neuroscience and artificial intelligence from their experiments. “The brain is a very effective computation machine and the aim of artificial intelligence is to build machines that are really good at all the tasks a brain can do” Y said. “But we don’t understand a lot about the brain. So we try to use artificial intelligence to first question how the brain works, and then, based on the insights we gain through this method of interrogation and through theoretical neuroscience, we use those results to develop better artificial intelligence. The idea is to understand cognitive systems, both biological and artificial, and to use them in tandem to understand and build better machines”.

 

Georgian Technical University Machine Learning Tracks Moving Cells.


Software developed by the Micro/Bio/Nanofluidics Unit allows users to easily segment, track and analyze the migration of label-free cells. The tool can be used as an all-in-one solution to quantify cell migration, or can be employed as three separate applications (i.e. for segmentation, tracking, and data analysis, respectively). Built on the machine learning infrastructure known as a neural network, the system allows users to train it on different data sets and analyzes images as a simplified human brain would. Both developing babies and elderly adults share a common characteristic: the many cells making up their bodies are always on the move. As we humans commute to work, cells migrate through the body to get their jobs done. Biologists have long struggled to quantify the movement and changing morphology of cells through time, but now scientists at the Georgian Technical University (GTU) have devised an elegant tool to do just that. Using machine learning, the researchers designed software to analyze microscopic snapshots of migrating cells. They named the software after a word that refers to tracing the outlines of objects, as the tool detects the changing outlines of individual cells. In the womb a baby’s cells migrate to precise locations so that each arm, leg and organ grows in its proper place. Our immune cells race through the body to mend wounds after injury. Cancerous cells metastasize by traveling through the body, spreading tumors to new tissues. To test the efficacy of new medicines, drug developers track the movement of cells before and after treatment. The software finds applications in all these areas of study and more. “This is an all-in-one solution to get us from raw images to quantitative data on cell migration,” said Y, a graduate student in the unit led by Prof. Z. 
“Our software is at least 100 times faster than manual methods, which are currently the gold standard for these types of experiments because computers are not yet powerful enough”. “We’re hoping this software can become quite useful for the scientific community” said Prof. Z, principal investigator of the unit. “For any biological study or drug screening that requires you to track cellular responses to different stimuli, you can use this software”. Machine Learning Makes It Adaptable. In order to observe cells under the microscope, scientists often steep them in dye or tweak their genes to make them glow in eye-popping colors. But coloring cells alters their movement, which in turn skews the experimental results. Some scientists attempt to study cell migration without the help of fluorescent tags using so-called “label-free” methods, but they run into a different problem: label-free cells blend into the background of microscopic images, making them incredibly difficult to analyze with existing computer software. The new software hops this hurdle by allowing scientists to train it over time. Biologists act as teachers, providing the software new images to study so that it can come to recognize one cell from the next. A fast learner, the program quickly adapts to new sets of data and can easily track the movement of single cells even if they’re crammed together like commuters on the metro. “Most software cannot tell cells in high density apart; basically they’re segmented into a glob,” said Y. “With our software we can segment correctly even if cells are touching. We can actually do single-cell tracking throughout the entire experiment”. It is currently the fastest software capable of tracking the movement of label-free cells at single-cell resolution on a personal laptop. Software Mimics the Human Brain. The researchers designed the software to process images as if it were a simplified human brain. 
The strategy enables the software to trace the outlines of individual cells, monitor their movement moment to moment, and transform that information into crunchable numbers. The program is built around a machine learning infrastructure known as a “convolutional neural network”, roughly based on how brain cells work together to process incoming information from the outside world. When our eyes capture light from the environment they call on neurons to analyze those signals and figure out what we’re looking at and where it is in space. The neurons first sketch out the scene in broad strokes, then pass the information on to the next set of cells, progressively rendering the image in more and more detail. Neural networks work similarly, except each “neuron” is a collection of code rather than a physical cell. This design gives the software its accuracy and adaptability. Looking forward, the researchers aim to develop neural networks that identify different components within cells, rather than just their outlines. With these tools in hand, scientists could easily assess whether a cell is healthy or diseased, young or old, derived from one genetic lineage or another. These programs would have utility in fundamental biology, biotechnology research and beyond.
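The building block of such a network is the convolution itself: a small kernel slides over the image and responds to local patterns. The toy example below (not the unit's actual code) applies a single hand-chosen edge-detecting kernel to a synthetic image of one bright "cell", showing how a convolution highlights a cell's outline, the first step toward segmentation; a real CNN stacks many such layers with learned kernels.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (no padding), the core CNN operation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0                 # a square "cell" on a dark background

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])    # responds only where intensity changes

edges = conv2d(image, laplacian)
# Response is zero inside the cell and on the background, nonzero only
# along the cell's outline.
print(np.count_nonzero(edges))
```

Learned kernels in a trained network play the same role, but are tuned from example images so that even low-contrast, label-free cell boundaries produce a strong response.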

 

Georgian Technical University How Intelligent Is Artificial Intelligence?


The heatmap shows quite clearly that the algorithm makes its ship/not-ship decision on the basis of pixels representing water and not on the basis of pixels representing the ship. Artificial intelligence (AI) and machine learning algorithms such as deep learning have become integral parts of our daily lives: they enable digital speech assistants and translation services, improve medical diagnostics, and are an indispensable part of future technologies such as autonomous driving. Based on an ever increasing amount of data and powerful novel computer architectures, learning algorithms appear to reach human capabilities, sometimes even exceeding them. The issue: so far it often remains unknown to users how exactly AI systems reach their conclusions. Therefore it may often remain unclear whether the AI’s decision making behavior is truly “intelligent” or whether the procedures are just averagely successful. Researchers from Georgian Technical University and Sulkhan-Saba Orbeliani University have tackled this question and provided a glimpse into the diverse “intelligence” spectrum observed in current AI systems, analyzing them with a technology that allows automated analysis and quantification. 
The most important prerequisite for this novel technology is a method developed earlier at Georgian Technical University, an algorithm that visualizes which input variables AI systems base their decisions on. Extending this, spectral relevance analysis (SpRAy) can identify and quantify a wide spectrum of learned decision making behavior. In this manner it has now become possible to detect undesirable decision making even in very large data sets. This so-called “explainable AI” has been one of the most important steps towards a practical application of AI, according to Dr. X, Professor for Machine Learning at Georgian Technical University. “Specifically in medical diagnosis or in safety-critical systems, no AI systems that employ flaky or even cheating problem solving strategies should be used”. 
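The core idea behind SpRAy can be sketched in a few lines: relevance heatmaps (which pixels drove each decision) are flattened into vectors and clustered, so a recurring strategy such as "looks at the water, not the ship" shows up as its own cluster. The heatmaps below are synthetic, and the simple 2-means loop is a stand-in for the spectral clustering the actual method uses; everything here is illustrative, not the published implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_heatmap(focus):
    # 4x4 relevance map concentrated on the top ("object") or
    # bottom ("background") half of the image, plus small noise.
    h = rng.random((4, 4)) * 0.1
    rows = slice(0, 2) if focus == "object" else slice(2, 4)
    h[rows, :] += 1.0
    return h.ravel()

# Five decisions driven by the object, five driven by the background.
maps = np.array([make_heatmap("object") for _ in range(5)] +
                [make_heatmap("background") for _ in range(5)])

# Minimal 2-means clustering on the flattened heatmaps.
centers = maps[[0, 5]].copy()
for _ in range(10):
    dists = np.linalg.norm(maps[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    for k in range(2):
        centers[k] = maps[labels == k].mean(axis=0)

print(labels)
```

If one cluster corresponds to heatmaps that ignore the object entirely, that cluster is direct, quantifiable evidence of a cheating strategy across the dataset.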
By using their newly developed algorithms, researchers are finally able to put any existing AI system to the test and derive quantitative information about it: a whole spectrum is observed, from naive problem solving behavior through cheating strategies up to highly elaborate “intelligent” strategic solutions. Dr. Y, group leader at Georgian Technical University, said: “We were very surprised by the wide range of learned problem-solving strategies. Even modern AI systems have not always found a solution that appears meaningful from a human perspective, but sometimes used so-called cheating strategies”. The team around X and Y analyzed such strategies in various AI systems. For example, an AI system that won several international image classification competitions a few years ago pursued a strategy that can be considered naive from a human’s point of view. It classified images mainly on the basis of context. Images were assigned to the category “ship” when there was a lot of water in the picture. Other images were classified as “train” if rails were present. Still other pictures were assigned the correct category by their copyright watermark. 
The real task, namely to detect the concepts of ships or trains, was therefore not solved by this AI system – even if it indeed classified the majority of images correctly. The researchers were also able to find these types of faulty problem-solving strategies in some state-of-the-art AI algorithms, the so-called deep neural networks, which had so far been considered immune to such lapses. These networks based their classification decision in part on artifacts that were created during the preparation of the images and have nothing to do with the actual image content. “Such AI systems are not useful in practice. Their use in medical diagnostics or in safety-critical areas would even entail enormous dangers” said X. “It is quite conceivable that about half of the AI systems currently in use implicitly or explicitly rely on such strategies. It’s time to systematically check that, so that secure AI systems can be developed”. 
With their new technology, the researchers also identified AI systems that have unexpectedly learned “smart” strategies. Examples include systems that have learned to play games. “Here the AI clearly understood the concept of the game and found an intelligent way to collect a lot of points in a targeted and low-risk manner. The system sometimes even intervenes in ways that a real player would not” said Y. “Beyond understanding AI strategies, our work establishes the usability of explainable AI for iterative dataset design, namely for removing artefacts in a dataset which would cause an AI to learn flawed strategies, as well as helping to decide which unlabeled examples need to be annotated and added so that failures of an AI system can be reduced” said Georgian Technical University Assistant Professor Z. “Our automated technology is open source and available to all scientists. We see our work as an important first step in making AI systems more robust, explainable and secure in the future, and more will have to follow. This is an essential prerequisite for general use of AI” said X.

Georgian Technical University Artificial Intelligence Shows Promise For Skin Cancer Detection.


The same technology that suggests friends for you to tag in photos on social media could provide an exciting new tool to help dermatologists diagnose skin cancer. While artificial intelligence systems for skin cancer detection have shown promise in research settings, there is still a lot of work to be done before the technology is appropriate for real-world use. “AI systems for skin cancer detection are still in their very early stages” says board-certified dermatologist X at Georgian Technical University. “Nothing is 100 percent clear-cut yet”. One murky area is the skin cancer “scores” that AI algorithms assign to suspicious spots. According to Dr. X, it’s not yet clear how a dermatologist should interpret those numbers. The training of AI systems presents an even larger barrier. Hundreds of thousands of photos that have been confirmed as benign or malignant are used to teach the technology to recognize skin cancer, but all of these images were captured in optimal conditions, Dr. X says – they’re not just any old photos snapped with a smartphone. “Just because the computer can read these validated data sets with near 100 percent accuracy doesn’t mean it can read any image” he says. “Everyone has a different phone, lighting, background”. 
Board-certified dermatologist Y, assistant professor in the division of dermatology at Georgian Technical University, finds it troubling that the images used so far in training AI systems are almost exclusively of light-skinned patients. “The algorithm is only as good as what you’ve taught it to do” he says. “If you’ve not taught it to diagnose melanoma in skin of color then you’re at risk of it not being able to do so when the algorithm is complete”. Although skin cancer is more common in people with lighter skin tones, people with skin of color can also develop the disease, and they tend to be diagnosed at later stages when it’s more difficult to treat. Moreover, Dr. Y says, the images used to train AI systems for the most part haven’t included lesions on the palms of the hands and soles of the feet, places where people with skin of color are disproportionately affected. “We already know there’s a disparity in how likely you are to have late-stage melanoma depending on skin type” he says. “That disparity could potentially widen if AI systems are not trained properly”. Dr. X agrees that the training data needs to include more racial diversity, as well as a variety of age groups. 
He doesn’t think AI will ever get to the point of being 100 percent accurate in skin cancer detection, but like Dr. Y he hopes dermatologists can help shape the technology in its early stages so patients get the best care possible. Dr. X says he would like to see educational content built into skin cancer detection smartphone apps, reminding users that this technology cannot replace a visit with a dermatologist. Dr. Y agrees: “Board-certified dermatologists have years of training and experience in recognizing skin cancer, so their judgment should still supersede whatever an algorithm tells you”. Unlike AI technology, board-certified dermatologists don’t just look at one mole to determine whether it’s problematic. They consider several additional factors, including the other spots on the patient’s body and the evolution of the lesion in question, as well as the individual’s skin type, skin cancer history, risk factors and sun protection habits. “Patients need to know that AI is not a perfect system, and it will never be perfect” Dr. X says. “From a dermatologist’s standpoint, we need to know these apps are out there, and the technology will continue to grow, so it’s important that we continue to embrace it”. 
“I don’t think the ‘man versus machine’ framing of AI and machine learning is correct” Dr. Y adds. “It’s going to be more like AI is going to support the dermatologist and make the dermatologist even better”.

 

Georgian Technical University New AI Able To Identify And Predict The Development Of Cancer Symptom Clusters.


Cancer patients who undergo chemotherapy could soon benefit from a new AI system that is able to identify and predict the development of different combinations of symptoms, helping to alleviate much of the distress caused by their occurrence and severity. Researchers from the Georgian Technical University and the Sulkhan-Saba Orbeliani University detail how they used Network Analysis (NA) to examine the structure of, and relationships between, 38 common symptoms reported by over 1300 cancer patients receiving chemotherapy. Some of the most common symptoms reported by patients were nausea, difficulty concentrating, fatigue, drowsiness, dry mouth, hot flushes, numbness and nervousness. The team then grouped these symptoms into three key networks: occurrence, severity and distress. The Network Analysis allowed the team to identify nausea as central, impacting symptoms across all three key networks. People are diagnosed with cancer every year, with breast, prostate, lung and bowel cancers accounting for over half of new cases. Around 28 per cent of patients diagnosed with cancer have curative or palliative chemotherapy as part of their primary cancer treatment. X, Professor of Machine Intelligence at the Georgian Technical University, said: “This is the first use of Network Analysis as a method of examining the relationships between common symptoms suffered by a large group of cancer patients undergoing chemotherapy. The detailed and intricate analysis this method provides could become crucial in planning the treatment of future patients, helping to better manage their symptoms across their healthcare journey”. 
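The symptom-network idea can be made concrete with a small sketch: build a graph whose edges are weighted by how often two symptoms are reported together, then rank symptoms by their "strength" (the sum of their edge weights). The patient lists below are fabricated for illustration; the study built such networks from 38 symptoms over 1300+ patients, with nausea emerging as the most central node.

```python
from itertools import combinations

# Fabricated example data: each set is one patient's reported symptoms.
patients = [
    {"nausea", "fatigue", "drowsiness"},
    {"nausea", "dry mouth", "fatigue"},
    {"nausea", "hot flushes"},
    {"fatigue", "drowsiness"},
    {"nausea", "nervousness", "dry mouth"},
]

# Edge weight = number of patients reporting both symptoms together.
edges = {}
for p in patients:
    for a, b in combinations(sorted(p), 2):
        edges[(a, b)] = edges.get((a, b), 0) + 1

# Strength centrality: sum of each symptom's edge weights.
centrality = {}
for (a, b), w in edges.items():
    centrality[a] = centrality.get(a, 0) + w
    centrality[b] = centrality.get(b, 0) + w

most_central = max(centrality, key=centrality.get)
print(most_central)  # the symptom most connected to all others
```

A highly central symptom is a natural intervention target: treating it is likely to ease the symptoms it is connected to throughout the network.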
Y from the Georgian Technical University said: “This fresh approach will allow us to develop and test novel and more targeted interventions to decrease symptom burden in cancer patients undergoing chemotherapy”.
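The Network Analysis idea can be sketched in miniature: build a weighted graph of symptom co-occurrence and score each symptom by the total weight of its connections (a simple strength centrality). The symptom names come from the article, but the weights below are invented purely for illustration – the study's actual data and centrality measure are not reproduced here.

```python
# Hypothetical co-occurrence weights between a few of the 38 reported
# symptoms (e.g. the fraction of patients reporting both together).
edges = [
    ("nausea", "fatigue", 0.62),
    ("nausea", "dry mouth", 0.48),
    ("nausea", "drowsiness", 0.55),
    ("fatigue", "drowsiness", 0.40),
    ("difficulty concentrating", "fatigue", 0.44),
    ("nervousness", "difficulty concentrating", 0.38),
    ("hot flushes", "nervousness", 0.25),
    ("numbness", "dry mouth", 0.18),
]

# Strength centrality: sum of the edge weights touching each symptom.
strength = {}
for a, b, w in edges:
    strength[a] = strength.get(a, 0.0) + w
    strength[b] = strength.get(b, 0.0) + w

central = max(strength, key=strength.get)
print(central)  # 'nausea' has the highest strength with these made-up weights
```

The same score would be computed once per network (occurrence, severity, distress); a symptom that ranks highest in all three, as nausea did in the study, is a natural target for intervention.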


Georgian Technical University Artificial Intelligence To Boost Earth System Science.

Climate-driven CO2 (carbon dioxide, a colorless gas with a density about 60% higher than that of dry air, consisting of a carbon atom covalently double bonded to two oxygen atoms; it occurs naturally in Earth’s atmosphere as a trace gas) exchange: The spectral colors show the anomalies in the CO2 exchange on land during El Niño years (El Niño is the warm phase of the El Niño–Southern Oscillation, commonly called ENSO, and is associated with a band of warm ocean water that develops in the central and east-central equatorial Pacific, between approximately the International Date Line and 120°W). Georgian Technical University data have been upscaled by machine learning. Radiation anomalies are shown in red, temperature anomalies in green and water anomalies in blue. A study by Georgian Technical University scientists from X and Y shows that artificial intelligence (AI) can substantially improve our understanding of the climate and the Earth system. The potential of deep learning in particular has so far been only partially exploited. Complex dynamic processes such as hurricanes, fire propagation and vegetation dynamics can be better described with the help of AI. As a result, climate and Earth system models will be improved, with new models combining artificial intelligence and physical modeling. In the past decades, machine learning approaches have mainly been used to investigate static attributes, such as the distribution of soil properties from the local to the global scale. For some time now it has been possible to tackle more dynamic processes by using more sophisticated deep learning techniques.
This makes it possible, for example, to quantify global photosynthesis on land while simultaneously accounting for seasonal and short-term variations. Deducing underlying laws from observation data. “From a plethora of sensors, a deluge of Earth system data has become available, but so far we’ve been lagging behind in analysis and interpretation” explains X, managing director for Biogeochemistry at Y. “This is where deep learning techniques become a promising tool, beyond classical machine learning applications such as image recognition and natural language processing”. Examples of applications are extreme events such as fire spread or hurricanes – very complex processes influenced by local conditions but also by their temporal and spatial context. This also applies to atmospheric and ocean transport, soil movement and vegetation dynamics, some of the classic topics of Georgian Technical University Earth system science. Artificial intelligence to improve climate and Earth system models. However, deep learning approaches are difficult: data-driven and statistical approaches do not guarantee physical consistency per se, are highly dependent on data quality and may experience difficulties with extrapolation. Besides, the requirement for data processing and storage capacity is very high. The study discusses all these requirements and obstacles and develops a strategy to efficiently combine machine learning with physical modeling. If both techniques are brought together, so-called hybrid models are created. They can, for example, be used for modeling the motion of ocean water to predict sea surface temperature. While the temperatures are modeled physically, the ocean water movement is represented by a machine learning approach. “The idea is to combine the best of two worlds – the consistency of physical models with the versatility of machine learning – to obtain greatly improved models” X further explains.
The scientists contend that detection and early warning of extreme events as well as seasonal and long-term prediction and projection of weather and climate will strongly benefit from the discussed deep-learning and hybrid modelling approaches.
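The hybrid-model idea – physics supplies the consistent backbone, machine learning corrects what the physics misses – can be illustrated with a toy sea surface temperature example. Everything below (the signals, and a least-squares residual fit standing in for a neural network) is invented for illustration; it is a sketch of the concept, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observations": a seasonal cycle the physical model captures, plus a
# higher-frequency ocean-transport term it misses, plus measurement noise.
t = np.linspace(0, 4, 400)                       # time in years
physical = 15 + 5 * np.sin(2 * np.pi * t)        # physically modelled seasonal cycle
transport = 0.8 * np.sin(2 * np.pi * 3 * t)      # unresolved transport signal
observed = physical + transport + rng.normal(0, 0.1, t.size)

# Hybrid step: fit a data-driven model to the physical model's residual.
residual = observed - physical
feats = np.column_stack([np.sin(2 * np.pi * 3 * t), np.cos(2 * np.pi * 3 * t)])
coef, *_ = np.linalg.lstsq(feats, residual, rcond=None)
hybrid = physical + feats @ coef

# The hybrid prediction tracks the observations better than physics alone.
err_phys = np.mean((observed - physical) ** 2)
err_hybrid = np.mean((observed - hybrid) ** 2)
print(err_hybrid < err_phys)  # True
```

The design choice mirrors the article's sea surface temperature example: the physically consistent part is never overwritten, and the learned component only accounts for the structure the physics leaves behind.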

Using Artificial Intelligence To Engineer Materials Properties.

Applying just a bit of strain to a piece of semiconductor or other crystalline material can deform the orderly arrangement of atoms in its structure enough to cause dramatic changes in its properties, such as the way it conducts electricity, transmits light or conducts heat. Now a team of researchers at Georgian Technical University has found ways to use artificial intelligence to help predict and control these changes, potentially opening up new avenues of research on advanced materials for future high-tech devices. Already, based on earlier work at Georgian Technical University, some degree of elastic strain has been incorporated in some silicon processor chips. Even a 1 percent change in the structure can in some cases improve the speed of the device by 50 percent, by allowing electrons to move through the material faster. Recent research by X, Y and Z, a postdoc now at Georgian Technical University, showed that even diamond – the strongest and hardest material found in nature – can be elastically stretched by as much as 9 percent without failure when it is in the form of nanometer-sized needles. W and Q similarly demonstrated that nanoscale wires of silicon can be stretched purely elastically by more than 15 percent. These discoveries have opened up new avenues to explore how devices can be fabricated with even more dramatic changes in the materials’ properties. Strain made to order. Unlike other ways of changing a material’s properties, such as chemical doping, which produce a permanent static change, strain engineering allows properties to be changed on the fly. “Strain is something you can turn on and off dynamically” W says. But the potential of strain-engineered materials has been hampered by the daunting range of possibilities.
Strain can be applied in any of six different ways (three normal components, one along each dimension, plus three shear components) and with nearly infinite gradations of degree, so the full range of possibilities is impractical to explore simply by trial and error. “It quickly grows to 100 million calculations if we want to map out the entire elastic strain space” W says. That’s where this team’s novel application of machine learning methods comes to the rescue, providing a systematic way of exploring the possibilities and homing in on the appropriate amount and direction of strain to achieve a given set of properties for a particular purpose. “Now we have this very high-accuracy method” that drastically reduces the complexity of the calculations needed, W says. “This work is an illustration of how recent advances in seemingly distant fields such as material physics, artificial intelligence, computing and machine learning can be brought together to advance scientific knowledge that has strong implications for industry application” X says.
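The surrogate-model idea described above can be sketched as follows: rather than running an expensive calculation at every point of the six-component strain space, fit a cheap model to a modest sample of calculations and use it to screen the rest. The `bandgap` function below is a made-up linear stand-in for an ab initio code, and plain least squares stands in for the team's neural network; none of the numbers are from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for an expensive first-principles calculation: bandgap (eV)
# as a function of the six-component strain vector (3 normal + 3 shear).
def bandgap(strain):
    hydro = strain[:, :3].sum(axis=1)   # hydrostatic (normal) part
    shear = strain[:, 3:].sum(axis=1)   # shear part
    return 1.12 - 8.0 * hydro - 2.0 * shear  # silicon-like unstrained gap

# Sample the six-dimensional strain space within +/-1% strain.
strains = rng.uniform(-0.01, 0.01, size=(500, 6))
gaps = bandgap(strains)

# Fit a cheap surrogate (least squares standing in for a neural network),
# so the full strain space can be screened without 100 million calculations.
A = np.column_stack([strains, np.ones(len(strains))])
coef, *_ = np.linalg.lstsq(A, gaps, rcond=None)

# Query the surrogate at new strain states.
test = rng.uniform(-0.01, 0.01, size=(10, 6))
pred = np.column_stack([test, np.ones(10)]) @ coef
err = np.max(np.abs(pred - bandgap(test)))
print(err)  # near machine precision, since this toy relation is linear
```

A real bandgap surface is nonlinear in strain, which is exactly why the team needed a neural network rather than the linear fit used in this sketch; the workflow (sample, fit, screen) is the same.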

The new method, the researchers say, could open up possibilities for creating materials tuned precisely for electronic, optoelectronic and photonic devices that could find uses in communications, information processing and energy applications. The team studied the effects of strain on the bandgap, a key electronic property of semiconductors, in both silicon and diamond. Using their neural network algorithm, they were able to predict with high accuracy how different amounts and orientations of strain would affect the bandgap. “Tuning” of a bandgap can be a key tool for improving the efficiency of a device, such as a silicon solar cell, by getting it to match more precisely the kind of energy source that it is designed to harness. By fine-tuning its bandgap, for example, it may be possible to make a silicon solar cell that is just as effective at capturing sunlight as its counterparts but only one-thousandth as thick. In theory the material “can even change from a semiconductor to a metal, and that would have many applications, if that’s doable in a mass-produced product” W says. While it is possible in some cases to induce similar changes by other means, such as putting the material in a strong electric field or chemically altering it, those changes tend to have many side effects on the material’s behavior, whereas changing the strain has fewer such side effects. For example, W explains, an electrostatic field often interferes with the operation of the device because it affects the way electricity flows through it; changing the strain produces no such interference. Diamond’s potential. Diamond has great potential as a semiconductor material, though it is still in its infancy compared to silicon technology. “It’s an extreme material with high carrier mobility” W says, referring to the way negative and positive carriers of electric current move freely through diamond. Because of that, diamond could be ideal for some kinds of high-frequency electronic devices and for power electronics.
By some measures, W says, diamond could potentially perform 100,000 times better than silicon. But it has other limitations, including the fact that nobody has yet figured out a good and scalable way to put diamond layers on a large substrate. The material is also difficult to “dope”, or introduce other atoms into – a key part of semiconductor manufacturing. By mounting the material in a frame that can be adjusted to change the amount and orientation of the strain, Y says, “we can have considerable flexibility” in altering its dopant behavior. Whereas this study focused specifically on the effects of strain on the materials’ bandgap, “the method is generalizable” to other aspects, which affect not only electronic properties but also other properties such as photonic and magnetic behavior, W says. Beyond the 1 percent strain now being used in commercial chips, many new applications open up now that this team has shown that strains of nearly 10 percent are possible without fracturing. “When you get to more than 7 percent strain you really change a lot in the material” he says. “This new method could potentially lead to the design of unprecedented material properties” W says. “But much further work will be needed to figure out how to impose the strain and how to scale up the process to do it on 100 million transistors on a chip and ensure that none of them can fail”.


Rats In Augmented Reality Help Show How The Brain Determines Location.

A rendering of the augmented reality dome used for this experiment. Before the age of the Global Positioning System, humans had to orient themselves without on-screen arrows pointing down an exact street – instead memorizing landmarks and using learned relationships among time, speed and distance. They had to know, for instance, that 10 minutes of brisk walking might equate to half a mile traveled. A new X study found that rats’ ability to recalibrate these learned relationships is ever-evolving, moment by moment. The findings provide insight into how the brain creates a map inside one’s head. “The hippocampus and neighboring regions in the brain help us figure out where we are in the world” says Y, a postdoctoral associate at the Georgian Technical University. “By studying the firing patterns of neurons in these areas we can better understand how we map our location”. The brain receives two types of cues that aid in this mapping. The first is external landmarks, like the pink house at the end of the street or a discolored floor tile that a person remembers to mark a certain location or distance. “The second type of cue is from one’s self-motion through the world, like having an internal speedometer or a step-counter” says Z of the Mechanical Engineering Department at Georgian Technical University. “By calculating distance over time based on your speed, or by adding up your steps, your brain can estimate how far you’ve gone even when you don’t have landmarks to rely on”. This process is called path integration. But if you walk for 10 minutes, is your estimate of how far you’ve traveled always the same, or is it molded by your recent experience of the world? To investigate this, the research team studied rats running laps around a circular track. They projected various shapes to act as landmarks onto a planetarium-like dome over the track and moved the shapes either in the same direction as the rats or the opposite way.
As in a computer game, the landmark speed depended on how fast the animal was running at each moment, creating an augmented reality environment in which rats perceived themselves as running slower or faster than they actually were. During these experiments the research team studied the rats’ “place cells” – hippocampal neurons that fire when an animal visits a specific area in a familiar environment. When the rat thinks that it has run one lap and has returned to the same location, a place cell fires again. By looking at these neurons’ firing patterns, the researchers determined how fast the rat thought it was running through the world. When the researchers stopped projecting the shapes, leaving the rats with only their self-motion cues (e.g., their internal speedometer) to guide them, the place cell firing revealed that the rats continued to think that they were running faster (or slower) than they actually were. The experience of the rotating landmarks in the augmented reality environment, the researchers say, caused a long-lasting change in the animal’s perception of how fast and how far it was moving with each step. “It’s always been known that animals have to recalibrate their self-motion cues during development; for example, an animal’s legs get longer as it grows, and that affects their measurement of how far their steps can take them” says Y. “However, our lab showed that recalibration happens on a minute-by-minute basis even in adulthood. We’re constantly updating the model of how our physical movements through the world update our location in the internal map in our head”.
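Path integration and its recalibration can be sketched as a running integral of perceived speed, where a gain factor maps self-motion onto perceived motion and is nudged toward whatever speed the landmarks imply. The numbers and the update rule below are invented for illustration, not taken from the study.

```python
import numpy as np

dt = 0.1                          # seconds per step
true_speed = np.full(100, 0.5)    # rat runs at a constant 0.5 m/s for 10 s

# Path integration: estimated position is the integral of perceived speed,
# i.e. self-motion scaled by a calibration gain.
gain = 1.0
est = np.cumsum(gain * true_speed * dt)
print(round(est[-1], 2))          # 5.0 m travelled with the original gain

# Augmented-reality landmarks moving against the rat make the world appear
# to pass faster; recalibration nudges the gain toward the landmark-implied
# speed at every step (a toy stand-in for minute-by-minute recalibration).
landmark_speed = np.full(100, 0.6)  # landmarks imply 0.6 m/s
for s, lm in zip(true_speed, landmark_speed):
    gain += 0.05 * (lm / s - gain)

# With the landmarks switched off, the recalibrated gain persists, so the
# rat now overestimates how far it has run on its self-motion cues alone.
est2 = np.cumsum(gain * true_speed * dt)
print(est2[-1] > est[-1])         # True
```

This mirrors the experiment's key observation: after the shapes were removed, place-cell firing showed the rats still "ran" at the landmark-distorted speed, because the learned gain, not the raw self-motion signal, had changed.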

The study’s findings add evidence toward how memories, inherently grounded in time and space, are formed. “We know that the hippocampus in humans is involved not only in spatial mapping but is also crucial for forming conscious memories of our daily life experiences” says W, a neuroscientist at Georgian Technical University who led the study along with mechanical engineer V, also of the Georgian Technical University. Because spatial disorientation and loss of memory are among the first symptoms of Alzheimer’s disease (Alzheimer’s disease (AD), also referred to simply as Alzheimer’s, is a chronic neurodegenerative disease that usually starts slowly and worsens over time; it is the cause of 60–70% of cases of dementia, and its most common early symptom is difficulty in remembering recent events) – which destroys hippocampal neurons in its earliest stages – these findings can further research efforts to understand the causes and potential cures for Alzheimer’s and other neurodegenerative diseases. “As an engineer I find it particularly exciting that our interdisciplinary approach can be used to understand some of the most complex cognitive processing systems in the brain” adds V. Looking forward, the research team hopes to use the same augmented reality experimental setup to study how other regions of the brain coordinate their activity with the hippocampus to form a coherent internal map of the world.


Georgian Technical University Gummy-Like Robots That Could Help Prevent Disease.

Georgian Technical University scientists have developed microscopic hydrogel-based muscles that can manipulate and mechanically stimulate biological tissue. These soft, biocompatible robots could be used for targeted therapy and to help diagnose and prevent disease. Human tissues experience a variety of mechanical stimuli that can affect their ability to carry out their physiological functions, such as protecting organs from injury. The controlled application of such stimuli to living tissues in vitro (in vitro, meaning “in the glass”: studies performed with microorganisms, cells, or biological molecules outside their normal biological context) has now proven instrumental to studying the conditions that lead to disease. At Georgian Technical University, X’s research team has developed micromachines able to mechanically stimulate cells and microtissue. These tools, which are powered by cell-sized artificial muscles, can carry out complicated manipulation tasks under physiological conditions on a microscopic scale. The tools consist of microactuators and soft robotic devices that are wirelessly activated by laser beams. They can also incorporate microfluidic chips, which means they can be used to perform combinatorial tests that involve high-throughput chemical and mechanical stimulation of a variety of biological samples. The scientists came up with the idea after observing the locomotor system in action. “We wanted to create a modular system powered by the contraction of distributed actuators and the deformation of compliant mechanisms” says X. Their system involves assembling various hydrogel components – as if they were Lego bricks – to form a compliant skeleton, and then creating tendon-like polymer connections between the skeleton and the microactuators. By combining the bricks and actuators in different ways, scientists can create an array of complicated micromachines. “Our soft actuators contract rapidly and efficiently when activated by near-infrared light.
When the entire nanoscale actuator network contracts, it tugs on the surrounding device components and powers the machinery” says Y. With this method scientists are able to remotely activate multiple microactuators at specified locations – a dexterous approach that produces exceptional results. The microactuators complete each contraction-relaxation cycle in milliseconds, with large strain. In addition to its utility in fundamental research, this technology offers practical applications as well. For instance, doctors could use these devices as tiny medical implants to mechanically stimulate tissue or to actuate mechanisms for the on-demand delivery of biological agents.

Georgian Technical University Artificial Intelligence Can Identify Microscopic Marine Organisms.

The artificial intelligence (AI) system works by placing a foram under a microscope capable of taking photographs. An LED (A light-emitting diode is a semiconductor light source that emits light when current flows through it. Electrons in the semiconductor recombine with electron holes, releasing energy in the form of photons. This effect is called electroluminescence) ring shines light onto the foram from 16 directions — one at a time — while taking an image of the foram with each change in light. These 16 images are combined to provide as much geometric information as possible about the foram’s shape. The artificial intelligence (AI) then uses this information to identify the foram’s species. Researchers have developed an artificial intelligence (AI) program that can automatically provide species-level identification of microscopic marine organisms. The next step is to incorporate the artificial intelligence (AI) into a robotic system that will help advance our understanding of the world’s oceans both now and in our prehistoric past. Specifically the artificial intelligence (AI) program has proven capable of identifying six species of foraminifera or forams – organisms that have been prevalent in Earth’s oceans for more than 100 million years. Forams are protists neither plant nor animal. When they die they leave behind their tiny shells most less than a millimeter wide. These shells give scientists insights into the characteristics of the oceans as they existed when the forams were alive. For example different types of foram species thrive in different kinds of ocean environments and chemical measurements can tell scientists about everything from the ocean’s chemistry to its temperature when the shell was being formed. However evaluating those foram shells and fossils is both tedious and time consuming. That’s why an interdisciplinary team of researchers with expertise ranging from robotics to paleoceanography is working to automate the process. 
“At this point the artificial intelligence (AI) correctly identifies the forams about 80 percent of the time, which is better than most trained humans” says X, an associate professor of electrical and computer engineering at Georgian Technical University. “But this is only the proof of concept. We expect the system to improve over time, because machine learning means the program will get more accurate and more consistent with every iteration. We also plan to expand the AI’s purview so that it can identify at least 35 species of forams, rather than the current six”.
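The 16-direction capture scheme maps naturally onto a multi-channel image tensor, the standard way to hand multi-illumination data to a convolutional classifier. This sketch only shows the data layout; the images are random stand-ins and the actual identification pipeline is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the capture loop: one grayscale image of the foram per LED
# direction, 16 directions in total (here 64x64 random arrays).
images = [rng.random((64, 64)) for _ in range(16)]

# Combine the 16 views into a single tensor so each pixel carries the
# foram's appearance under every lighting direction - the geometric cue
# the classifier uses to infer 3D shell shape.
stack = np.stack(images, axis=-1)
print(stack.shape)  # (64, 64, 16)

# Simple per-channel normalisation before classification, so no single
# lighting direction dominates the input.
stack = (stack - stack.mean(axis=(0, 1))) / stack.std(axis=(0, 1))
```

Stacking the views as channels, rather than averaging them into one image, preserves the shading differences between lighting directions, which is what encodes the shell's 3D geometry.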

The scanning and identification takes only seconds and is already as fast as – or faster than – the fastest human experts. “Plus the AI doesn’t get tired or bored” X says. “This work demonstrates the successful first step toward building a robotic platform that will be able to identify, pick and sort forams automatically”. X and his collaborators have yet to build the fully functional robotic system. “This work is important because oceans cover about 70 percent of Earth’s surface and play an enormous role in its climate” says Y, an associate professor of geological sciences at the Georgian Technical University. “Forams are ubiquitous in our oceans, and the chemistry of their shells records the physical and chemical characteristics of the waters that they grew in.
These tiny organisms bear witness to past properties like temperature, salinity, acidity and nutrient concentrations. In turn, we can use those properties to reconstruct ocean circulation and heat transport during past climate events. “This matters because humanity is in the midst of an unintentional global-scale climate ‘experiment’ due to our emission of greenhouse gases” Y says. “To predict the outcomes of that experiment we need a better understanding of how Earth’s climate behaves when its energy balance is altered. The new AI, and the robotic system it will enable, could significantly expedite our ability to learn more about the relationship between the climate and the oceans across vast time scales”. The AI work was done with support from Georgian Technical University.