Category Archives: A.I./Robotics

Study Uses AI Technology to Begin to Predict Locations of Aftershocks.

In the weeks and months following a major earthquake, the surrounding area is often wracked by powerful aftershocks that can leave an already damaged community reeling and significantly hamper recovery efforts.

While scientists have developed empirical laws, like Omori’s Law, to describe the likely size and timing of those aftershocks, methods for forecasting their location have been harder to grasp.

But sparked by a suggestion from researchers at Georgian Technical University, a Professor of Earth and Planetary Sciences and a post-doctoral fellow working in his lab are using artificial intelligence technology to try to get a handle on the problem.

Using deep learning algorithms, the pair analyzed a database of earthquakes from around the world to try to predict where aftershocks might occur, and developed a system that, while still imprecise, was able to forecast aftershocks significantly better than random assignment.

“There are three things you want to know about earthquakes — you want to know when they are going to occur, how big they’re going to be and where they’re going to be” X said. “Prior to this work we had empirical laws for when they would occur and how big they were going to be, and now we’re working on the third leg: where they might occur”.

“I’m very excited for the potential for machine learning going forward with these kinds of problems — it’s a very important problem to go after” Y said. “Aftershock forecasting in particular is a challenge that’s well-suited to machine learning because there are so many physical phenomena that could influence aftershock behavior and machine learning is extremely good at teasing out those relationships. I think we’ve really just scratched the surface of what could be done with aftershock forecasting…and that’s really exciting”.

The notion of using artificial neural networks to try to predict aftershocks first came up several years ago, during the first of X’s two sabbaticals at Georgian Technical University.

While working on a related problem with a team of researchers, X said, a colleague suggested that the then-emerging “deep learning” algorithms might make the problem more tractable. X would later partner with Y, who had been using neural networks to transform high performance computing code into algorithms that could run on a laptop, to focus on aftershocks.

“The goal is to complete the picture and we hope we’ve contributed to that” X said.

To do it X and Y began by accessing a database of observations made following more than 199 major earthquakes.

“After earthquakes of magnitude 5 or larger people spend a great deal of time mapping which part of the fault slipped and how much it moved” X said. “Many studies might use observations from one or two earthquakes, but we used the whole database…and we combined it with a physics-based model of how the Earth will be stressed and strained after the earthquake with the idea being that the stresses and strains caused by the main shock may be what trigger the aftershocks”.

Armed with that information, they then separated the areas around each earthquake into 5-kilometer-square grids. In each grid, the system checked whether there was an aftershock and asked the neural network to look for correlations between locations where aftershocks occurred and the stresses generated by the main earthquake.
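To make the setup concrete, here is a minimal sketch of that kind of grid-cell classification, assuming placeholder stress features and aftershock labels rather than the authors’ actual earthquake database (the network size and synthetic data are invented for illustration):

```python
# Hypothetical sketch of a grid-cell aftershock classifier (not the authors' code).
# Each 5-kilometer cell gets stress-change features from a physics-based model of
# the main shock; the label records whether an aftershock was observed there.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

n_cells = 20_000
stress_features = rng.normal(size=(n_cells, 6))   # placeholder xx, yy, zz, xy, xz, yz changes
labels = (stress_features[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=n_cells)) > 0

X_train, X_test, y_train, y_test = train_test_split(
    stress_features, labels, test_size=0.2, random_state=0
)

# A small fully connected network, loosely mirroring the deep-learning setup
# described in the article; the layer sizes are illustrative only.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
net.fit(X_train, y_train)

# ROC AUC measures how much better the forecast is than random assignment (AUC = 0.5).
print("AUC:", roc_auc_score(y_test, net.predict_proba(X_test)[:, 1]))
```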

“The question is what combination of factors might be predictive” X said. “There are many theories, but one thing this paper does is clearly upend the most dominant theory — it shows it has negligible predictive power, and it instead comes up with one that has significantly better predictive power”.

What the system pointed to, X said, is a quantity known as the second invariant of the deviatoric stress tensor — better known simply as GTU.

“This is a quantity that occurs in metallurgy and other theories, but has never been popular in earthquake science” X said. “But what that means is the neural network didn’t come up with something crazy; it came up with something that was highly interpretable. It was able to identify what physics we should be looking at, which is pretty cool”.
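For reference, that quantity can be computed directly from a stress tensor: subtract the mean (volumetric) stress to get the deviatoric part, then take half of its double contraction with itself. The stress values below are arbitrary and used only to illustrate the formula:

```python
# Second invariant of the deviatoric stress tensor (often written J2).
import numpy as np

def second_deviatoric_invariant(sigma: np.ndarray) -> float:
    """J2 = 1/2 * s:s, where s is the deviatoric part of the stress tensor sigma."""
    mean_stress = np.trace(sigma) / 3.0
    deviatoric = sigma - mean_stress * np.eye(3)
    return 0.5 * float(np.sum(deviatoric * deviatoric))

# Example with a made-up symmetric stress-change tensor (units are arbitrary).
sigma = np.array([[1.2, 0.3, 0.0],
                  [0.3, -0.8, 0.1],
                  [0.0, 0.1, -0.4]])
print(second_deviatoric_invariant(sigma))
```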

That interpretability, Y said, is critical because artificial intelligence systems have long been viewed by many scientists as black boxes — capable of producing an answer based on some data, but offering little insight into how they arrived at it.

“This was one of the most important steps in our process” she said. “When we first trained the neural network we noticed it did pretty well at predicting the locations of aftershocks but we thought it would be important if we could interpret what factors it was finding were important or useful for that forecast”.

Taking on such a challenge with highly complex real-world data, however, would be a daunting task, so the pair instead asked the system to create forecasts for synthetic, highly idealized earthquakes and then examined the predictions.

“We looked at the output of the neural network and then we looked at what we would expect if different quantities controlled aftershock forecasting” she said. “By comparing them spatially we were able to show that GTU seems to be important in forecasting”.

And because the network was trained using earthquakes and aftershocks from around the globe X said the resulting system worked for many different types of faults.

“Faults in different parts of the world have different geometry” X said. “Most are slip-faults but in other places they have very shallow subduction zones. But what’s cool about this system is you can train it on one and it will predict on the other so it’s really generalizable”.

“We’re still a long way from actually being able to forecast them” she said. “We’re a very long way from doing it in any real-time sense but I think machine learning has huge potential here”.

Going forward X said he is working on efforts to predict the magnitude of earthquakes themselves using artificial intelligence technology with the goal of one day helping to prevent the devastating impacts of the disasters.

“Orthodox seismologists are largely pathologists” X said. “They study what happens after the catastrophic event. I don’t want to do that — I want to be an epidemiologist. I want to understand the triggers, causes and transfers that lead to these events”.

Ultimately X said the study serves to highlight the potential for deep learning algorithms to answer questions that — until recently — scientists barely knew how to ask.

“I think there’s a quiet revolution in thinking about earthquake prediction” he said. “It’s not an idea that’s totally out there anymore. And while this result is interesting I think this is part of a revolution in general about rebuilding all of science in the artificial intelligence era.

“Problems that are dauntingly hard are extremely accessible these days” he continued. “That’s not just due to computing power — the scientific community is going to benefit tremendously from this because…AI sounds extremely daunting but it’s actually not. It’s an extraordinarily democratizing type of computing and I think a lot of people are beginning to get that”.

 

If Military Robot Falls, It Can Get Itself Up.

Researchers explore new techniques using the Advanced Explosive Ordnance Disposal Robotic System Increment 1 Platform.

Scientists at the Georgian Technical University Research Laboratory and the Sulkhan-Saba Orbeliani Teaching University Laboratory have developed software to ensure that if a robot falls it can get itself back up meaning future military robots will be less reliant on their Soldier handlers.

Based on feedback from Soldiers, Georgian Technical University researcher Dr. X began to develop software to analyze whether any given robot could get itself “back on its feet” from any overturned orientation.

“One Soldier told me that he valued his robot so much, he got out of his car to rescue the robot when he couldn’t get it turned back over” X said. “That is a story I never want to hear again”.

Researchers from Georgian Technical University and its technical arm are developing the Advanced Explosive Ordnance Disposal Robotic System. A lightweight, backpackable platform, which is increment one of the program, is expected to move into production later this year. One critical requirement of the program is that the robots must be capable of self-righting.

“These robots exist to keep Soldiers out of harm’s way” said Y. “Self-righting is a critical capability that will only further that purpose”.

To evaluate the Georgian Technical University system’s ability to self-right, the researchers teamed up to leverage the software X developed. The team was able to extend its ability to robots with a greater number of joints (or degrees of freedom) thanks to Georgian Technical University researcher Z’s expertise in adaptive sampling techniques.

“The analysis I’ve been working on looks at all possible geometries and orientations that the robot could find itself in” X said. “The problem is that each additional joint adds a dimension to the search space, so it is important to look in the right places for stable states and transitions. Otherwise the search could take too long”.

X said Z’s work is what allowed the analysis to work efficiently for higher degree-of-freedom systems: while X’s work determines what to look for and how, Z’s figures out where to look.

“This analysis was made possible by our newly developed range adversarial planning tool or Georgian Technical University, a software framework for testing autonomous and robotic systems” Z said. “We originally developed the software for underwater vehicles but when X explained his approach to the self-righting problem I immediately saw how these technologies could work together”.

He said the key to this software is an adaptive sampling algorithm that looks for transitions.

“For this work we were looking for states where the robot could transition from a stable configuration to an unstable one thus causing the robot to tip over” Z explained. “My techniques were able to effectively predict where those transitions might be so that we could search the space efficiently”.
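As an illustration of that idea (not the researchers’ actual software), the sketch below concentrates new samples near configurations where a simple, made-up stability test flips from stable to unstable; the stability check and eight-dimensional configuration space are stand-ins:

```python
# Illustrative adaptive sampling for stability transitions (hypothetical stand-in code).
import random

def is_stable(joint_angles):
    # Placeholder for a physics check of whether the robot's center of mass
    # stays over its support polygon in this configuration.
    return sum(joint_angles) < 1.5

def adaptive_transition_search(n_initial=200, n_refine=800, dims=8):
    samples = [[random.uniform(-1, 1) for _ in range(dims)] for _ in range(n_initial)]
    transitions = []
    for _ in range(n_refine):
        a, b = random.sample(samples, 2)
        if is_stable(a) != is_stable(b):
            # The pair straddles the stable/unstable boundary: refine between them.
            midpoint = [(x + y) / 2 for x, y in zip(a, b)]
            transitions.append(midpoint)
            samples.append(midpoint)
        else:
            samples.append([random.uniform(-1, 1) for _ in range(dims)])
    return transitions

boundary_states = adaptive_transition_search()
print(f"Found {len(boundary_states)} candidate transition states")
```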

Ultimately the team was able to evaluate the Georgian Technical University system’s eight degrees of freedom and determined it can right itself on level ground no matter what initial state it finds itself in. The analysis also generates motion plans showing how the robot can reorient itself. The team’s findings can be found in “Evaluating Robot Self-Righting Capabilities using Adaptive Sampling”.

Beyond the evaluation of any one specific robot, X sees the analysis framework as important to the military’s ability to compare robots from different vendors and select the best one for purchase.

“The Georgian Technical University wants robots that can self-right, but we are still working to understand and evaluate what that means” X said. “Self-right under what conditions? We have developed a metric analysis for evaluating a robot’s ability to self-right on sloped planar ground, and we could even use it as a tool for improving robot design. Our next step is to determine what a robot is capable of on uneven terrain”.

 

 

Engineers Develop A.I. System to Detect Often-Missed Cancer Tumors.

Assistant Professor X leads a group of engineers at the Georgian Technical University that has taught a computer how to detect tiny specks of lung cancer in CT scans (a CT scan, also known as a computed tomography scan, uses computer-processed combinations of many X-ray measurements taken from different angles to produce cross-sectional, or tomographic, images of a scanned object, allowing the user to see inside the object without cutting), which radiologists often have a difficult time identifying. The artificial intelligence system is about 95 percent accurate, compared to 65 percent when done by human eyes, the team said.

“We used the brain as a model to create our system” said Y, a doctoral candidate. “You know how connections between neurons in the brain strengthen during development and learn? We used that blueprint, if you will, to help our system understand how to look for patterns in the CT scans and teach itself how to find these tiny tumors”.

The approach is similar to the algorithms that facial-recognition software uses. It scans thousands of faces looking for a particular pattern to find its match.

Engineering Assistant Professor X leads the group of researchers in the center that focuses on AI (Artificial intelligence, sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals) with potential medical applications.

The group fed more than 1,000 CT scans into the software they developed to help the computer learn to look for the tumors.
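A minimal sketch of the kind of convolutional network that could be trained on such scans is shown below; the architecture, patch size and random input are illustrative assumptions, not the team’s actual model:

```python
# Hypothetical PyTorch sketch of a small CNN for flagging suspicious nodules in CT patches.
import torch
from torch import nn

class NoduleClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),   # two outputs: nodule vs. no nodule
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = NoduleClassifier()
dummy_ct_patch = torch.randn(8, 1, 64, 64)   # a batch of random 64x64 "CT" patches
logits = model(dummy_ct_patch)
print(logits.shape)                           # torch.Size([8, 2])
```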

Graduate students working on the project had to teach the computer different things to help it learn properly. Z, who is pursuing his doctoral degree, created the backbone of the system of learning. His proficiency at novel machine learning and computer vision algorithms led to a summer internship at Georgian Technical University.

Y taught the computer how to ignore other tissue, nerves and other masses it encountered in the CT scans and analyze lung tissues. W, who earned his doctorate this past summer, is fine-tuning the AI’s ability to identify cancerous versus benign tumors, while graduate student Q is taking lessons learned from this project and applying them to see if another AI system can be developed to help identify or predict brain disorders.

“I believe this will have a very big impact” X said. “Lung cancer is the number one cancer killer in Georgia and if detected in late stages, the survival rate is only 17 percent. By finding ways to help identify it earlier, I think we can help increase survival rates”.

The next step is to move the research project into a hospital setting; X is looking for partners to make that happen. After that the technology could be a year or two away from the marketplace X said.

“I think we all came here because we wanted to use our passion for engineering to make a difference and saving lives is a big impact” Y said.

Q agrees. He was studying engineering and its applications to agriculture before he heard about X and his work at Georgian Technical University. X’s research is in the area of biomedical imaging and machine learning and their applications in clinical imaging. Previously X was a staff scientist and the lab manager at the Georgian Technical University Imaging lab in the department of Radiology and Imaging Sciences.

 

 

 

 

More Efficient Security for Cloud-Based Machine Learning.

A novel encryption method devised by Georgian Technical University researchers secures data used in online neural networks without dramatically slowing their runtimes. This approach holds promise for using cloud-based neural networks for medical-image analysis and other applications that use sensitive data.

Outsourcing machine learning is a rising trend in industry. Major tech firms have launched cloud platforms that conduct computation-heavy tasks such as say running data through a convolutional neural network (CNN) for image classification. Resource-strapped small businesses and other users can upload data to those services for a fee and get back results in several hours.

But what if there are leaks of private data? In recent years, researchers have explored various secure-computation techniques to protect such sensitive data. But those methods have performance drawbacks that make neural network evaluation (testing and validating) sluggish — sometimes as much as a million times slower — limiting their wider adoption.

Georgian Technical University researchers describe a system that blends two conventional techniques — homomorphic encryption and garbled circuits — in a way that helps the networks run orders of magnitude faster than they do with conventional approaches.

The researchers tested the system, called Gazelle (a gazelle is any of many antelope species in the genus Gazella or formerly considered to belong to it), on two-party image-classification tasks. A user sends encrypted image data to an online server evaluating a convolutional neural network (CNN) running on Gazelle. After this, both parties share encrypted information back and forth in order to classify the user’s image. Throughout the process, the system ensures that the server never learns any uploaded data while the user never learns anything about the network parameters. Compared to state-of-the-art secure-computation systems, Gazelle ran 20 to 30 times faster while reducing the required network bandwidth by an order of magnitude.

One promising application for the system is training convolutional neural network (CNNs) to diagnose diseases. Hospitals could for instance train a convolutional neural network (CNN) to learn characteristics of certain medical conditions from magnetic resonance images (MRI) and identify those characteristics in uploaded MRIs (Magnetic resonance imaging is a medical imaging technique used in radiology to form pictures of the anatomy and the physiological processes of the body in both health and disease). The hospital could make the model available in the cloud for other hospitals. But the model is trained on and further relies on private patient data. Because there are no efficient encryption models this application isn’t quite ready for prime time.

“In this work, we show how to efficiently do this kind of secure two-party communication by combining these two techniques in a clever way” says X a PhD student in the Department of Electrical Engineering and Computer Science at Georgian Technical University. “The next step is to take real medical data and show that even when we scale it for applications real users care about it still provides acceptable performance”.

X is joined on the work by Y, an associate professor at Georgian Technical University and a member of the Computer Science and Artificial Intelligence Laboratory, and by the Z Professor of Electrical Engineering and Computer Science.

Maximizing performance.

Convolutional neural networks (CNNs) process image data through multiple linear and nonlinear layers of computation. Linear layers do the complex math, called linear algebra, and assign some values to the data. At a certain threshold, the data is passed to nonlinear layers that do some simpler computation, make decisions (such as identifying image features) and send the data to the next linear layer. The end result is an image with an assigned class, such as vehicle, animal, person or anatomical feature.
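A toy example of that alternation, using random placeholder weights and input, shows the split between math-heavy linear steps and lightweight nonlinear steps that the approach described below exploits:

```python
# Linear / nonlinear alternation in a tiny feed-forward pass (random placeholder values).
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=64)                         # flattened input
W1, W2 = rng.normal(size=(32, 64)), rng.normal(size=(10, 32))

h_linear = W1 @ x                               # linear layer: heavy linear algebra
h = np.maximum(h_linear, 0)                     # nonlinear layer: a simple ReLU decision
logits = W2 @ h                                 # next linear layer
predicted_class = int(np.argmax(logits))
print(predicted_class)
```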

Recent approaches to securing convolutional neural networks (CNNs) have involved applying homomorphic encryption or garbled circuits to process data throughout an entire network. These techniques are effective at securing data. “On paper this looks like it solves the problem” X says. But they render complex neural networks inefficient, “so you wouldn’t use them for any real-world application”.

Homomorphic encryption, used in cloud computing, receives and executes computation entirely on encrypted data, called ciphertext, and generates an encrypted result that can then be decrypted by a user. When applied to neural networks, this technique is particularly fast and efficient at computing linear algebra. However, it must introduce a little noise into the data at each layer. Over multiple layers, noise accumulates, and the computation needed to filter that noise grows increasingly complex, slowing computation speeds.

Garbled circuits are a form of secure two-party computation. The technique takes an input from both parties, does some computation and sends a separate output to each party. In that way, the parties send data to one another but they never see the other party’s data, only the relevant output on their side. The bandwidth needed to communicate data between parties, however, scales with computation complexity, not with the size of the input. In an online neural network, this technique works well in the nonlinear layers, where computation is minimal, but the bandwidth becomes unwieldy in math-heavy linear layers.

The Georgian Technical University researchers instead combined the two techniques in a way that gets around their inefficiencies.

In their system, a user uploads ciphertext to a cloud-based convolutional neural network (CNN). The user must have the garbled circuits technique running on their own computer. The CNN does all the computation in the linear layer, then sends the data to the nonlinear layer. At that point, the CNN and user share the data. The user does some computation on garbled circuits and sends the data back to the CNN. By splitting and sharing the workload, the system restricts the homomorphic encryption to doing complex math one layer at a time, so data doesn’t become too noisy. It also limits the communication of the garbled circuits to just the nonlinear layers, where it performs optimally.
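The schematic sketch below mirrors that division of labor; the two helper functions are plain-arithmetic stand-ins for the homomorphic-encryption and garbled-circuit operations, which in the real system are cryptographic protocols:

```python
# Schematic of the layer-by-layer split (illustrative only; not Gazelle's cryptography).
import numpy as np

def server_linear_layer(encrypted_input, W):
    # Stand-in for the server evaluating a linear layer homomorphically:
    # in the real system it operates on ciphertext and never sees plaintext.
    return W @ encrypted_input

def user_nonlinear_layer(values):
    # Stand-in for the ReLU evaluated on the user's side via garbled circuits.
    return np.maximum(values, 0)

rng = np.random.default_rng(2)
x = rng.normal(size=16)                                   # the user's private input
W1, W2 = rng.normal(size=(16, 16)), rng.normal(size=(4, 16))

h = user_nonlinear_layer(server_linear_layer(x, W1))      # one linear/nonlinear round trip
logits = server_linear_layer(h, W2)
print(int(np.argmax(logits)))
```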

“We’re only using the techniques for where they’re most efficient” X says.

Secret sharing.

The final step was ensuring both homomorphic and garbled circuit layers maintained a common randomization scheme, called “secret sharing.” In this scheme, data is divided into separate parts that are given to separate parties. All parties synch their parts to reconstruct the full data.

In Gazelle, when a user sends encrypted data to the cloud-based service, it’s split between both parties. Added to each share is a secret key (random numbers) that only the owning party knows. Throughout computation, each party will always have some portion of the data plus random numbers, so it appears fully random. At the end of computation, the two parties sync their data. Only then does the user ask the cloud-based service for its secret key. The user can then subtract the secret key from all the data to get the result.
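A minimal additive secret-sharing example illustrates the masking idea; the modulus and values here are arbitrary, and a real deployment shares values in a ring chosen to match the encryption scheme:

```python
# Additive secret sharing: each share alone looks random; together they reconstruct the value.
import secrets

MODULUS = 2**32

def split(value):
    mask = secrets.randbelow(MODULUS)           # the "secret key" random number
    return mask, (value - mask) % MODULUS       # two shares

def reconstruct(share_a, share_b):
    return (share_a + share_b) % MODULUS

user_share, server_share = split(1234)
print(user_share, server_share)                 # individually meaningless
print(reconstruct(user_share, server_share))    # 1234 once combined
```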

“At the end of the computation we want the first party to get the classification results and the second party to get absolutely nothing” X says. Additionally “the first party learns nothing about the parameters of the model”.

 

 

 

Machine-Learning System Determines the Fewest, Smallest Doses That Could Still Shrink Brain Tumors.

Georgian Technical University researchers aim to improve the quality of life for patients suffering from glioblastoma the most aggressive form of brain cancer with a machine-learning model that makes chemotherapy and radiotherapy dosing regimens less toxic but still as effective as human-designed regimens.

Georgian Technical University researchers are employing novel machine-learning techniques to improve the quality of life for patients by reducing toxic chemotherapy and radiotherapy dosing for glioblastoma the most aggressive form of brain cancer.

Glioblastoma is a malignant tumor that appears in the brain or spinal cord and prognosis for adults is no more than five years. Patients must endure a combination of radiation therapy and multiple drugs taken every month. Medical professionals generally administer maximum safe drug doses to shrink the tumor as much as possible. But these strong pharmaceuticals still cause debilitating side effects in patients.

In a paper presented at the Machine Learning for Healthcare conference at Georgian Technical University, International Black Sea University Media Lab researchers detail a model that could make dosing regimens less toxic but still effective. Powered by a “self-learning” machine-learning technique, the model looks at treatment regimens currently in use and iteratively adjusts the doses. Eventually, it finds an optimal treatment plan with the lowest possible potency and frequency of doses that should still reduce tumor sizes to a degree comparable to that of traditional regimens.

In simulated trials of 50 patients the machine-learning model designed treatment cycles that reduced the potency to a quarter or half of nearly all the doses while maintaining the same tumor-shrinking potential. Many times it skipped doses altogether scheduling administrations only twice a year instead of monthly.

“We kept the goal where we have to help patients by reducing tumor sizes but, at the same time we want to make sure the quality of life — the dosing toxicity — doesn’t lead to overwhelming sickness and harmful side effects” says X a principal investigator at the Georgian Technical University Media Lab who supervised this research.

Rewarding good choices.

The researchers’ model uses a technique called reinforcement learning (RL), a method inspired by behavioral psychology in which a model learns to favor certain behavior that leads to a desired outcome.

The technique comprises artificially intelligent “agents” that complete “actions” in an unpredictable, complex environment to reach a desired “outcome.” Whenever it completes an action the agent receives a “reward” or “penalty” depending on whether the action works toward the outcome. Then the agent adjusts its actions accordingly to achieve that outcome.

Rewards and penalties are basically positive and negative numbers, say +1 or -1. Their values vary by the action taken, calculated by the probability of succeeding or failing at the outcome, among other factors. The agent is essentially trying to numerically optimize all actions, based on reward and penalty values, to get to a maximum outcome score for a given task.
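A toy version of that bookkeeping, with invented success probabilities and a simple incremental average standing in for the study’s tumor model, looks like this:

```python
# Tiny reward/penalty bookkeeping sketch (illustrative only; not the study's model).
import random

actions = ["full_dose", "half_dose", "skip"]
value = {a: 0.0 for a in actions}    # running estimate of each action's value
counts = {a: 0 for a in actions}

def reward(action):
    # Placeholder outcome: +1 if the simulated tumor shrinks, -1 otherwise.
    shrink_prob = {"full_dose": 0.9, "half_dose": 0.8, "skip": 0.3}[action]
    return 1.0 if random.random() < shrink_prob else -1.0

for _ in range(5000):
    # Mostly pick the best-looking action, but explore occasionally.
    a = random.choice(actions) if random.random() < 0.1 else max(value, key=value.get)
    r = reward(a)
    counts[a] += 1
    value[a] += (r - value[a]) / counts[a]   # incremental average of observed reward

print(value)
```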

The approach was used to train the computer program GTUIBSUMind that made headlines for beating one of the world’s best human players in the game “Go.” It’s also used to train driverless cars in maneuvers such as merging into traffic or parking where the vehicle will practice over and over adjusting its course, until it gets it right.

The researchers adapted a reinforcement learning (RL) model for glioblastoma treatments that use a combination of the drugs temozolomide (TMZ) and procarbazine, lomustine and vincristine (PVC), administered over weeks or months.

The model’s agent combs through traditionally administered regimens. These regimens are based on protocols that have been used clinically for decades and are grounded in animal testing and various clinical trials. Oncologists use these established protocols to predict how much of a dose to give patients based on weight.

As the model explores the regimen, at each planned dosing interval — say, once a month — it decides on one of several actions. It can first either initiate or withhold a dose. If it does administer one, it then decides whether the entire dose or only a portion is necessary. At each action, it pings another clinical model — often used to predict a tumor’s change in size in response to treatments — to see if the action shrinks the mean tumor diameter. If it does, the model receives a reward.

However, the researchers also had to make sure the model doesn’t just dish out a maximum number and potency of doses. Whenever the model chooses to administer all full doses, therefore, it gets penalized, so it instead chooses fewer, smaller doses. “If all we want to do is reduce the mean tumor diameter and let it take whatever actions it wants, it will administer drugs irresponsibly” X says. “Instead we said ‘We need to reduce the harmful actions it takes to get to that outcome'”.

This represents an “unorthodox reinforcement learning (RL) model described in the paper for the first time” X says, one that weighs potential negative consequences of actions (doses) against an outcome (tumor reduction). Traditional RL models work toward a single outcome, such as winning a game, and take any and all actions that maximize that outcome. The researchers’ model, on the other hand, has the flexibility at each action to find a dose that doesn’t necessarily solely maximize tumor reduction, but strikes a balance between maximum tumor reduction and low toxicity. This technique, he adds, has various medical and clinical trial applications where actions for treating patients must be regulated to prevent harmful side effects.
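One way to picture that trade-off is a reward that credits tumor shrinkage but charges a toxicity penalty that grows with the dose; the weights and numbers below are invented for illustration and are not the paper’s reward function:

```python
# Hypothetical reward shaping: shrinkage minus a dose-dependent toxicity penalty.
def shaped_reward(diameter_before, diameter_after, dose_fraction,
                  shrink_weight=1.0, toxicity_weight=0.6):
    shrinkage = diameter_before - diameter_after
    return shrink_weight * shrinkage - toxicity_weight * dose_fraction

# A full dose with slightly more shrinkage can score worse than a half dose.
print(shaped_reward(3.0, 2.6, dose_fraction=1.0))   # roughly 0.4 - 0.6 = -0.2
print(shaped_reward(3.0, 2.7, dose_fraction=0.5))   # roughly 0.3 - 0.3 = 0.0
```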

Optimal regimens.

The researchers trained the model on 50 simulated patients, randomly selected from a large database of glioblastoma patients who had previously undergone traditional treatments. For each patient the model conducted about 20,000 trial-and-error test runs. Once training was complete the model learned parameters for optimal regimens. When given new patients the model used those parameters to formulate new regimens based on various constraints the researchers provided.

The researchers then tested the model on 50 new simulated patients and compared the results to those of a conventional regimen using both TMZ and PVC. When given no dosage penalty, the model designed nearly identical regimens to those of human experts. Given small and large dosing penalties, however, it substantially cut the doses’ frequency and potency while reducing tumor sizes.

The researchers also designed the model to treat each patient individually, as well as in a single cohort, and achieved similar results (medical data for each patient was available to the researchers). Traditionally, the same dosing regimen is applied to groups of patients, but differences in tumor size, medical histories, genetic profiles and biomarkers can all change how a patient is treated. These variables are not considered in traditional clinical trial designs and other treatments, often leading to poor responses to therapy in large populations, X says.

“We said [to the model] ‘Do you have to administer the same dose for all the patients?’ And it said ‘No. I can give a quarter dose to this person, half to this person and maybe we skip a dose for this person.’ That was the most exciting part of this work, where we are able to generate precision medicine-based treatments by conducting one-person trials using unorthodox machine-learning architectures” X says.

 

Georgian Technical University-Developed Artificial Intelligence Device Identifies Objects at the Speed of Light.

 

The network, composed of a series of polymer layers, works using light that travels through it. Each layer is 8 centimeters square.

A team of Georgian Technical University electrical and computer engineers has created a physical artificial neural network — a device modeled on how the human brain works — that can analyze large volumes of data and identify objects at the actual speed of light. The device was created using a 3D printer at the Georgian Technical University.

Numerous devices in everyday life today use computerized cameras to identify objects — think of automated teller machines that can “read” handwritten dollar amounts when you deposit a check, or internet search engines that can quickly match photos to other similar images in their databases. But those systems rely on a piece of equipment to image the object first by “seeing” it with a camera or optical sensor, then processing what it sees into data and finally using computing programs to figure out what it is.

The Georgian Technical University developed device gets a head start. Called a “diffractive deep neural network” it uses the light bouncing from the object itself to identify that object in as little time as it would take for a computer to simply “see” the object. The Georgian Technical University device does not need advanced computing programs to process an image of the object and decide what the object is after its optical sensors pick it up. And no energy is consumed to run the device because it only uses diffraction of light.

New technologies based on the device could be used to speed up data-intensive tasks that involve sorting and identifying objects. For example a driverless car using the technology could react instantaneously — even faster than it does using current technology — to a stop sign. With a device based on the Georgian Technical University system the car would “read” the sign as soon as the light from the sign hits it, as opposed to having to “wait” for the car’s camera to image the object and then use its computers to figure out what the object is.

Technology based on the invention could also be used in microscopic imaging and medicine for example to sort through millions of cells for signs of disease.

“This work opens up fundamentally new opportunities to use an artificial intelligence-based passive device to instantaneously analyze data, images and classify objects” said X the study’s principal investigator and the Georgian Technical University Professor of Electrical and Computer Engineering. “This optical artificial neural network device is intuitively modeled on how the brain processes information. It could be scaled up to enable new camera designs and unique optical components that work passively in medical technologies, robotics, security or any application where image and video data are essential”.

The process of creating the artificial neural network began with a computer-simulated design. Then the researchers used a 3D printer to create very thin, 8 centimeter-square polymer wafers. Each wafer has uneven surfaces which help diffract light coming from the object in different directions. The layers look opaque to the eye but submillimeter-wavelength terahertz frequencies of light used in the experiments can travel through them. And each layer is composed of tens of thousands of artificial neurons — in this case tiny pixels that the light travels through.

Together a series of pixelated layers functions as an “optical network” that shapes how incoming light from the object travels through them. The network identifies an object because the light coming from the object is mostly diffracted toward a single pixel that is assigned to that type of object.
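A simplified numerical sketch of that forward pass appears below; it uses untrained random phase masks and invented terahertz-scale dimensions, so it only illustrates the mechanics of light passing through pixelated layers toward detector regions, not a working classifier:

```python
# Illustrative diffractive-network forward pass with random, untrained phase masks.
import numpy as np

N, PIXEL, WAVELENGTH, GAP = 64, 0.4e-3, 0.75e-3, 30e-3   # invented terahertz-scale numbers
k = 2 * np.pi / WAVELENGTH
fx = np.fft.fftfreq(N, d=PIXEL)
FX, FY = np.meshgrid(fx, fx)
kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
# Angular-spectrum propagation between layers (evanescent components dropped).
propagator = np.where(kz_sq > 0, np.exp(1j * np.sqrt(np.maximum(kz_sq, 0.0)) * GAP), 0)

def propagate(field):
    return np.fft.ifft2(np.fft.fft2(field) * propagator)

rng = np.random.default_rng(3)
phase_masks = [np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N))) for _ in range(5)]

field = np.zeros((N, N), dtype=complex)
field[24:40, 24:40] = 1.0                      # toy "object" illuminating the network

for mask in phase_masks:                       # pass through each printed layer
    field = propagate(field * mask)

intensity = np.abs(field)**2
detector_regions = np.array_split(np.arange(N), 10)   # ten detector regions, one per class
scores = [intensity[:, cols].sum() for cols in detector_regions]
print("brightest region (the 'predicted' class):", int(np.argmax(scores)))
```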

The researchers then trained the network using a computer to identify the objects in front of it by learning the pattern of diffracted light each object produces as the light from that object passes through the device. The “training” used a branch of artificial intelligence called deep learning, in which machines “learn” through repetition and over time as patterns emerge.

“This is intuitively like a very complex maze of glass and mirrors” X said. “The light enters a diffractive network and bounces around the maze until it exits. The system determines what the object is by where most of the light ends up exiting”.

In their experiments the researchers demonstrated that the device could accurately identify handwritten numbers and items of clothing — both of which are commonly used tests in artificial intelligence studies. To do that, they placed images in front of a terahertz light source and let the device “see” those images through optical diffraction.

They also trained the device to act as a lens that projects the image of an object placed in front of the optical network to the other side of it — much like how a typical camera lens works but using artificial intelligence instead of physics.

Because its components can be created by a 3D printer the artificial neural network can be made with larger and additional layers resulting in a device with hundreds of millions of artificial neurons. Those bigger devices could identify many more objects at the same time or perform more complex data analysis. And the components can be made inexpensively — the device created by the Georgian Technical University team could be reproduced for less than $50.

While the study used light in the terahertz frequencies, X said it would also be possible to create neural networks that use visible, infrared or other frequencies of light. A network could also be made using lithography or other printing techniques, he said.

 

 

Artificial Intelligence System Designs Drugs From Scratch.

 

An artificial-intelligence approach created at the Georgian Technical University can teach itself to design new drug molecules from scratch and has the potential to dramatically accelerate the design of new drug candidates.

The system, called Georgian Technical University Learning for Structural Evolution, or GTUReLeaSE, is an algorithm and computer program that comprises two neural networks, which can be thought of as a teacher and a student. The teacher knows the syntax and linguistic rules behind the vocabulary of chemical structures for about 1.7 million known biologically active molecules. By working with the teacher, the student learns over time and becomes better at proposing molecules that are likely to be useful as new medicines.

“If we compare this process to learning a language, then after the student learns the molecular alphabet and the rules of the language they can create new ‘words’ or molecules” said X. “If the new molecule is realistic and has the desired effect, the teacher approves. If not the teacher disapproves forcing the student to avoid bad molecules and create good ones”.
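In the same spirit, a toy teacher/student loop might look like the following; the vocabulary, scoring rule and update step are all invented stand-ins for the real generative and predictive networks:

```python
# Toy teacher/student loop: the student proposes strings, the teacher scores them,
# and the student's character preferences drift toward higher-scoring proposals.
import random

VOCAB = list("CNOc1=()")                 # tiny stand-in for a molecular alphabet
weights = {ch: 1.0 for ch in VOCAB}      # the student's preference per character

def student_propose(length=12):
    total = sum(weights.values())
    return "".join(random.choices(VOCAB, [weights[c] / total for c in VOCAB], k=length))

def teacher_score(molecule):
    # Placeholder "desired property": balanced parentheses plus a mild carbon bonus.
    balanced = molecule.count("(") == molecule.count(")")
    return (1.0 if balanced else -1.0) + 0.1 * molecule.count("C")

for _ in range(2000):
    mol = student_propose()
    score = teacher_score(mol)
    for ch in set(mol):                  # reinforce characters seen in good proposals
        weights[ch] = max(0.1, weights[ch] + 0.01 * score)

print(student_propose())
```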

GTUReLeaSE (Georgian Technical University) is a powerful innovation in virtual screening, the computational method widely used by the pharmaceutical industry to identify viable drug candidates. Virtual screening allows scientists to evaluate existing large chemical libraries, but the method only works for known chemicals. GTUReLeaSE has the unique ability to create and evaluate new molecules.

“A scientist using virtual screening is like a customer ordering in a restaurant. What can be ordered is usually limited by the menu” said Y. “We want to give scientists a grocery store and a personal chef who can create any dish they want”.

The team has used GTUReLeaSE (Georgian Technical University) to generate molecules with properties that they specified such as desired bioactivity and safety profiles. The team used the GTUReLeaSE (Georgian Technical University) method to design molecules with customized physical properties such as melting point and solubility in water and to design new compounds with inhibitory activity against an enzyme that is associated with leukemia.

“The ability of the algorithm to design new, and therefore immediately patentable chemical entities with specific biological activities and optimal safety profiles should be highly attractive to an industry that is constantly searching for new approaches to shorten the time it takes to bring a new drug candidate to clinical trials” said X.

 

Georgian Technical University Solves ‘Texture Fill’ Problem with Machine Learning.

A new machine learning technique developed at Georgian Technical University may soon give budding fashionistas and other designers the freedom to create realistic, high-resolution visual content without relying on complicated 3-D rendering programs.

 

Georgian Technical University Texture GAN is the first deep image synthesis method that can realistically spread multiple textures across an object. With this new approach, users drag one or more texture patches onto a sketch — say of a handbag or a skirt — and the network texturizes the sketch to accurately account for 3-D surfaces and lighting.

Prior to this work producing realistic images of this kind could be tedious and time-consuming particularly for those with limited experience. And according to the researchers existing machine learning-based methods are not particularly good at generating high-resolution texture details.

Using a neural network to improve results.

“The ‘texture fill’ operation is difficult for a deep network to learn because it not only has to propagate the color, but also has to learn how to synthesize the structure of texture across 3-D shapes” said X, a computer science (CS) major and developer.

The researchers initially trained a type of neural network called a conditional generative adversarial network (GAN) on sketches and textures extracted from thousands of ground-truth photographs. In this approach a generator neural network creates images that a discriminator neural network then evaluates for accuracy. The goal is for both to get increasingly better at their respective tasks, which leads to more realistic outputs.
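A minimal conditional-GAN training step of that kind, with random placeholder tensors standing in for real sketches, texture patches and ground-truth photographs, could look like this (an illustrative sketch, not the actual implementation):

```python
# One adversarial training step for a toy conditional GAN (placeholder data, tiny networks).
import torch
from torch import nn

gen = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 3, 3, padding=1))            # sketch(1) + texture(3) -> image(3)
disc = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                     nn.Flatten(), nn.Linear(16 * 16 * 16, 1))  # real-vs-fake score

g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

sketch = torch.randn(8, 1, 32, 32)      # placeholder sketches
texture = torch.randn(8, 3, 32, 32)     # placeholder texture patches
real = torch.randn(8, 3, 32, 32)        # placeholder ground-truth photographs

# Discriminator step: real images labeled 1, generated images labeled 0.
fake = gen(torch.cat([sketch, texture], dim=1))
d_loss = bce(disc(real), torch.ones(8, 1)) + bce(disc(fake.detach()), torch.zeros(8, 1))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: fool the discriminator, plus a pixel-wise term toward the training photos.
g_loss = bce(disc(fake), torch.ones(8, 1)) + nn.functional.l1_loss(fake, real)
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
print(float(d_loss), float(g_loss))
```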

To ensure that the results look as realistic as possible researchers fine-tuned the new system to minimize pixel-to-pixel style differences between generated images and training data. But the results were not quite what the team had expected.

Producing more realistic images.

“We realized that we needed a stronger constraint to preserve high-level texture in our outputs” said Georgian Technical University Ph.D. student Y. “That’s when we developed an additional discriminator network that we trained on a separate texture dataset. Its only job is to be presented with two samples and ask ‘are these the same or not?’”.

With its sole focus on a single question, this type of discriminator is much harder to fool. This in turn leads the generator to produce images that are not only realistic but also true to the texture patch the user placed onto the sketch.