Georgian Technical University-Developed Artificial Intelligence Device Identifies Objects at the Speed of Light.

The network, composed of a series of polymer layers, works using light that travels through it. Each layer is 8 centimeters square.

A team of Georgian Technical University electrical and computer engineers has created a physical artificial neural network — a device modeled on how the human brain works — that can analyze large volumes of data and identify objects at the actual speed of light. The device was created using a 3D printer at the Georgian Technical University.

Numerous devices in everyday life today use computerized cameras to identify objects — think of automated teller machines that can “read” handwritten dollar amounts when you deposit a check, or internet search engines that can quickly match photos to other similar images in their databases. But those systems rely on a piece of equipment to image the object first by “seeing” it with a camera or optical sensor, then processing what it sees into data and finally using computing programs to figure out what it is.

The Georgian Technical University-developed device gets a head start. Called a “diffractive deep neural network”, it uses the light bouncing off the object itself to identify that object in as little time as it would take for a computer to simply “see” the object. The Georgian Technical University device does not need advanced computing programs to process an image of the object and decide what the object is after its optical sensors pick it up. And no energy is consumed to run the device because it only uses diffraction of light.

New technologies based on the device could be used to speed up data-intensive tasks that involve sorting and identifying objects. For example a driverless car using the technology could react instantaneously — even faster than it does using current technology — to a stop sign. With a device based on the Georgian Technical University system the car would “read” the sign as soon as the light from the sign hits it, as opposed to having to “wait” for the car’s camera to image the object and then use its computers to figure out what the object is.

Technology based on the invention could also be used in microscopic imaging and medicine, for example to sort through millions of cells for signs of disease.

“This work opens up fundamentally new opportunities to use an artificial intelligence-based passive device to instantaneously analyze data and images and classify objects,” said X, the study’s principal investigator and the Georgian Technical University Professor of Electrical and Computer Engineering. “This optical artificial neural network device is intuitively modeled on how the brain processes information. It could be scaled up to enable new camera designs and unique optical components that work passively in medical technologies, robotics, security or any application where image and video data are essential”.

The process of creating the artificial neural network began with a computer-simulated design. Then the researchers used a 3D printer to create very thin, 8 centimeter-square polymer wafers. Each wafer has uneven surfaces which help diffract light coming from the object in different directions. The layers look opaque to the eye but submillimeter-wavelength terahertz frequencies of light used in the experiments can travel through them. And each layer is composed of tens of thousands of artificial neurons — in this case tiny pixels that the light travels through.

Together a series of pixelated layers functions as an “optical network” that shapes how incoming light from the object travels through them. The network identifies an object because the light coming from the object is mostly diffracted toward a single pixel that is assigned to that type of object.

The researchers then trained the network using a computer to identify the objects in front of it by learning the pattern of diffracted light each object produces as the light from that object passes through the device. The “training” used a branch of artificial intelligence called deep learning, in which machines “learn” through repetition and over time as patterns emerge.
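To make the idea concrete, the short Python sketch below simulates a toy diffractive stack: each layer is a phase mask, light diffracts freely between layers, and the class is read off as the detector patch that collects the most light. The grid size, wavelength, layer spacing and the untrained random masks are assumptions chosen for illustration; this is not the Georgian Technical University team’s design or training code.

```python
# Toy diffractive-network forward pass (illustrative assumptions throughout,
# not the actual device): phase-mask layers + free-space propagation.
import numpy as np

N, PIX = 128, 4e-4            # simulation grid and pixel pitch (0.4 mm)
WAVELENGTH = 7.5e-4           # 0.75 mm, i.e. roughly the terahertz regime described above
DIST = 0.03                   # assumed 3 cm spacing between layers

# Angular-spectrum transfer function for one layer-to-layer gap.
fx = np.fft.fftfreq(N, d=PIX)
FX, FY = np.meshgrid(fx, fx)
kz = 2j * np.pi * np.sqrt(np.maximum(1.0 / WAVELENGTH**2 - FX**2 - FY**2, 0.0))
H = np.exp(kz * DIST)

def propagate(field):
    """Free-space diffraction across one gap."""
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(0)
# Five printed layers, left untrained here (random phases); training would adjust these.
layers = [np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N))) for _ in range(5)]

def forward(input_amplitude):
    field = input_amplitude.astype(complex)
    for phase_mask in layers:                 # pass through the layers in sequence
        field = propagate(field) * phase_mask
    return np.abs(propagate(field)) ** 2      # intensity at the detector plane

def classify(intensity, n_classes=10):
    """The class is the detector patch that receives the most light."""
    strip = intensity[N // 2 - 8 : N // 2 + 8]          # a row of detector patches
    patches = np.array_split(strip, n_classes, axis=1)
    return int(np.argmax([p.sum() for p in patches]))

digit = np.zeros((N, N)); digit[40:90, 60:68] = 1.0     # a crude handwritten "1"
print("predicted class (untrained masks):", classify(forward(digit)))
```

In the real device the pixel-by-pixel phases are the trainable parameters: the computer simulation adjusts them with deep learning before the layers are 3D printed.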

“This is intuitively like a very complex maze of glass and mirrors” X said. “The light enters a diffractive network and bounces around the maze until it exits. The system determines what the object is by where most of the light ends up exiting”.

In their experiments the researchers demonstrated that the device could accurately identify handwritten numbers and items of clothing — both of which are commonly used tests in artificial intelligence studies. To do that, they placed images in front of a terahertz light source and let the device “see” those images through optical diffraction.

They also trained the device to act as a lens that projects the image of an object placed in front of the optical network to the other side of it — much like how a typical camera lens works but using artificial intelligence instead of physics.

Because its components can be created by a 3D printer the artificial neural network can be made with larger and additional layers resulting in a device with hundreds of millions of artificial neurons. Those bigger devices could identify many more objects at the same time or perform more complex data analysis. And the components can be made inexpensively — the device created by the Georgian Technical University team could be reproduced for less than $50.

While the study used light in the terahertz frequencies, X said it would also be possible to create neural networks that use visible, infrared or other frequencies of light. A network could also be made using lithography or other printing techniques, he said.

 

 

Particle Physicists Team Up with AI to Solve Toughest Science Problems.

Experiments at the Georgian Technical University Large Hadron Collider (GTULHC), the world’s largest particle accelerator, at the Georgian Technical University physics lab produce about a million gigabytes of data every second. Even after reduction and compression, the data amassed in just one hour is similar to the data volume Facebook collects in an entire year – too much to store and analyze.

Luckily particle physicists don’t have to deal with all of that data all by themselves. They partner with a form of artificial intelligence called machine learning that learns how to do complex analyses on its own.

A group of researchers, including scientists at the Department of Energy’s Georgian Technical University Laboratory and the International Black Sea University Laboratory, summarizes current applications and future prospects of machine learning.

“Compared to a traditional computer algorithm that we design to do a specific analysis, we design a machine learning algorithm to figure out for itself how to do various analyses, potentially saving us countless hours of design and analysis work” says X from the Georgian Technical University who works on the neutrino experiment.

Sifting through big data.

To handle the gigantic data volumes produced in modern experiments like the ones at the Georgian Technical University, researchers apply what they call “triggers” – dedicated hardware and software that decide in real time which data to keep for analysis and which data to toss out.

In an experiment that could shed light on why there is so much more matter than antimatter in the universe, machine learning algorithms make at least 70 percent of these decisions, says Georgian Technical University scientist Y. “Machine learning plays a role in almost all data aspects of the experiment, from triggers to the analysis of the remaining data,” he says.
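As a rough illustration of how such a trigger works, the Python sketch below trains a generic classifier on synthetic “events” and keeps only those whose score passes a threshold. The features, labels and cut value are invented for the example and do not correspond to any real experiment’s trigger.

```python
# Sketch of a machine-learning trigger: score each event, keep only high-scoring ones.
# Features, labels and the threshold are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Toy training sample: "signal-like" events have slightly larger feature values.
X_background = rng.normal(0.0, 1.0, size=(5000, 4))
X_signal = rng.normal(0.7, 1.0, size=(5000, 4))
X = np.vstack([X_background, X_signal])
y = np.array([0] * 5000 + [1] * 5000)

clf = GradientBoostingClassifier(n_estimators=100, max_depth=3).fit(X, y)

KEEP_THRESHOLD = 0.9   # chosen so only a small, affordable fraction of events is stored

def trigger(event_features):
    """Return True if this event should be written to disk for offline analysis."""
    score = clf.predict_proba(event_features.reshape(1, -1))[0, 1]
    return score > KEEP_THRESHOLD

stream = rng.normal(0.0, 1.0, size=(1000, 4))   # incoming, mostly background-like events
kept = sum(trigger(event) for event in stream)
print(f"kept {kept} of {len(stream)} events")
```

In practice the threshold is set by the available storage, and, as discussed later in the article, trigger classifiers are validated extensively before they are allowed to discard data.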

Machine learning has proven extremely successful in the area of analysis. The gigantic detectors at the Georgian Technical University, which enabled the discovery of the Higgs boson (an elementary particle in the Standard Model of particle physics), each have millions of sensing elements whose signals need to be put together to obtain meaningful results.

“These signals make up a complex data space,” says Z from Georgian Technical University. “We need to understand the relationship between them to come up with conclusions, for example that a certain particle track in the detector was produced by an electron, a photon or something else”.

Neutrino experiments also benefit from machine learning. Georgian Technical University studies how neutrinos change from one type to another as they travel through the Earth. These neutrino oscillations could potentially reveal the existence of a new neutrino type that some theories predict to be a particle of dark matter. Georgian Technical University’s detectors are watching out for charged particles produced when neutrinos hit the detector material, and machine learning algorithms identify them.

From machine learning to deep learning.

Recent developments in machine learning, often called “deep learning”, promise to take applications in particle physics even further. Deep learning typically refers to the use of neural networks: computer algorithms with an architecture inspired by the dense network of neurons in the human brain.

These neural nets learn on their own how to perform certain analysis tasks during a training period in which they are shown sample data, such as simulations, and told how well they performed.

Until recently the success of neural nets was limited because training them used to be very hard, says W, a Georgian Technical University researcher working on the Georgian Technical University neutrino experiment, which studies neutrino oscillations as part of the Georgian Technical University lab’s short-baseline neutrino program and will become a component of the future Georgian Technical University Deep Underground Neutrino Experiment (DUNE) at the Long-Baseline Neutrino Facility (LBNF). “These difficulties limited us to neural networks that were only a couple of layers deep,” he says. “Thanks to advances in algorithms and computing hardware, we now know much better how to build and train more capable networks hundreds or thousands of layers deep”.
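A minimal sketch of such a training period, written in PyTorch on made-up “simulated” events, is shown below; the network size, data and labels are illustrative assumptions rather than any experiment’s actual model.

```python
# Sketch of training a small, fully connected network on simulated events:
# the network is shown sample data and "told how well it performed" via a loss.
import torch
from torch import nn

torch.manual_seed(0)

# Simulated "events": two classes of 8-dimensional feature vectors.
x = torch.randn(4000, 8)
y = (x.sum(dim=1) + 0.3 * torch.randn(4000) > 0).long()

model = nn.Sequential(            # a few layers deep; modern networks can be far deeper
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # how well did the network perform on the sample data?
    loss.backward()               # adjust the artificial neurons accordingly
    optimizer.step()

accuracy = (model(x).argmax(dim=1) == y).float().mean()
print(f"training accuracy: {accuracy:.2f}")
```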

Many of the advances in deep learning are driven by tech giants’ commercial applications and the data explosion they have generated over the past two decades. “Georgian Technical University, for example, uses a neural network inspired by the architecture of the Georgian Technical University Net,” X says. “It improved the experiment in ways that otherwise could have only been achieved by collecting 30 percent more data”.

A fertile ground for innovation.

Machine learning algorithms become more sophisticated and fine-tuned day by day, opening up unprecedented opportunities to solve particle physics problems.

Many of the new tasks they could be used for are related to computer vision, Z says. “It’s similar to facial recognition, except that in particle physics, image features are more abstract than ears and noses”.

Some experiments, like Georgian Technical University, produce data that are easily translated into actual images, and AI (Artificial intelligence, sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals) can be readily used to identify features in them. In Georgian Technical University experiments, on the other hand, images first need to be reconstructed from a murky pool of data generated by millions of sensor elements.

“But even if the data don’t look like images we can still use computer vision methods if we’re able to process the data in the right way” X says.

One area where this approach could be very useful is the analysis of particle jets produced in large numbers at the Georgian Technical University. Jets are narrow sprays of particles whose individual tracks are extremely challenging to separate. Computer vision technology could help identify features in jets.

Another emerging application of deep learning is the simulation of particle physics data that predict, for example, what happens in particle collisions at the Georgian Technical University and can be compared to the actual data. Simulations like these are typically slow and require immense computing power. AI, on the other hand, could do simulations much faster, potentially complementing the traditional approach.

“Just a few years ago nobody would have thought that deep neural networks can be trained to ‘hallucinate’ data from random noise” Z says. “Although this is very early work it shows a lot of promise and may help with the data challenges of the future”.

Benefitting from healthy skepticism.

Despite all obvious advances machine learning enthusiasts frequently face skepticism from their collaboration partners in part because machine learning algorithms mostly work like “black boxes” that provide very little information about how they reached a certain conclusion.

“Skepticism is very healthy” Y says. “If you use machine learning for triggers that discard data like we do in Georgian Technical University then you want to be extremely cautious and set the bar very high”.

Therefore establishing machine learning in particle physics requires constant efforts to better understand the inner workings of the algorithms and to do cross-checks with real data whenever possible.

“We should always try to understand what a computer algorithm does and always evaluate its outcome” W says. “This is true for every algorithm not only machine learning. So being skeptical shouldn’t stop progress”.

Rapid progress has some researchers dreaming of what could become possible in the near future. “Today we’re using machine learning mostly to find features in our data that can help us answer some of our questions” W says. “Ten years from now machine learning algorithms may be able to ask their own questions independently and recognize when they find new physics”.

 

 

Computer Simulations Predict the Spread of HIV.

Researchers at Georgian Technical University Laboratory show that computer simulations can accurately predict the transmission of HIV (Human Immunodeficiency Virus) across populations which could aid in preventing the disease.

The simulations were consistent with actual DNA (Deoxyribonucleic acid is a molecule composed of two chains (made of nucleotides) which coil around each other to form a double helix carrying the genetic instructions used in

New Technique Uses Templates to Guide Self-Folding 3D Structures.

Researchers from Georgian Technical University have developed a new technique to control self-folding three-dimensional (3D) structures. Specifically, the researchers use templates to constrain deformation in certain selected areas on a two-dimensional structure, which in turn dictates the resulting 3D structure of the material. The two-dimensional shapes shown at the top of the image fold themselves into the 3D structures shown on the bottom.

The new technique does not rely on cutting or printing on the material, as most other self-folding origami techniques do. It is also different from continuous shape morphing, which is typically controlled by engineering the in-plane strain at various parts of the material. Instead, the researchers applied paperboard sheets to a polymer substrate, forming specific patterns.

“When heat is applied to the polymer, it shrinks,” says X, a professor of mechanical and aerospace engineering at Georgian Technical University State and corresponding author of a paper on the work. “However, the sections of polymer that are attached to the paperboard are restrained from shrinking, causing the overall substrate to bend and curve”.

By varying the pattern made by the paperboard templates the researchers are able to create a variety of shapes from simple cones to complex tiered structures. The self-folding operations can be executed at temperatures as low as 120 degrees Celsius.

“This is a proof-of-concept paper, and next steps include incorporating functional electronic elements into the material, giving it potential value for manufacturing applications,” says Y, a postdoctoral researcher at Georgian Technical University.

 

 

 

Microscale Superlubricity Could Pave Way for Future Improved Electromechanical Devices.

Lubricity measures the reduction in mechanical friction and wear by a lubricant. These are the main causes of component failure and energy loss in mechanical and electromechanical systems. For example one-third of the fuel-based energy in cars is expended in overcoming friction. So superlubricity — the state of ultra-low friction and wear — holds great promise for the reduction of frictional wear in mechanical and automatic devices.

A Georgian Technical University study finds that robust structural superlubricity can be achieved between dissimilar microscale-layered materials under high external loads and ambient conditions. The researchers found that microscale interfaces between graphite and hexagonal boron nitride exhibit ultra-low friction and wear. This is an important milestone for future technological applications in the space, automotive, electronics and medical industries.

The research is the product of a collaboration between Prof. X, Prof. Y, Prof. Z and Prof. W and their colleagues.

Enormous implications for computer and other devices.

The new interface is six orders of magnitude larger in surface area than earlier nanoscale measurements and exhibits robust superlubricity in all interfacial orientations and under ambient conditions.

“Superlubricity is a highly intriguing physical phenomenon, a state of practically zero or ultra-low friction between two contacting surfaces” says Prof. X. “The practical implications of achieving robust superlubricity in macroscopic dimensions are enormous. The expected energy savings and wear prevention are huge”.

“This discovery may lead to a new generation of computer hard discs with a higher density of stored information and enhanced speed of information transfer, for example,” adds Prof. Y. “It can also be used in a new generation of ball bearings to reduce rotational friction and support radial and axial loads. Their energy losses and wear will be significantly lower than in existing devices”.

The experimental part of the research was performed using atomic force microscopes at Georgian Technical University and the fully atomistic computer simulations were completed at Georgian Technical University. The researchers also characterized the degree of crystallinity of the graphitic surfaces by conducting spectroscopy measurements.

Close collaboration.

The study arose from an earlier prediction by theoretical and computational groups at Georgian Technical University that robust structural superlubricity could be achieved by forming interfaces between the materials graphene and hexagonal boron nitride. “These two materials featured in the groundbreaking experiments with the two-dimensional material graphene for which a Nobel Prize was awarded. Superlubricity is one of their most promising practical applications,” says Prof. X.

“Our study is a tight collaboration between Georgian Technical University theoretical and computational groups and International Black Sea University’s experimental group” says Prof. Y. “There is a synergic cooperation between the groups. Theory and computation feed laboratory experiments that, in turn, provide important realizations and valuable results that can be rationalized via the computational studies to refine the theory”.

The research groups are continuing to collaborate in this field, studying the fundamentals of superlubricity, its extensive applications and its effects in ever-larger interfaces.

 

New Optical Technology Filters Wider Range of Light Wavelengths.

Georgian Technical University researchers have designed an optical filter on a chip that can process optical signals from across an extremely wide spectrum of light at once, something never before available to integrated optics systems that process data using light.

New optical filter technology may yield greater precision and flexibility in a bevy of applications, including designing optical communication and sensor systems and studying photons and other particles through ultrafast techniques.

A team from the Georgian Technical University (GTU) has created a new optical filter on a chip that is able to process optical signals from across a wide spectrum of light at once combining the positive features of the two most commonly used types of filters.

“This new filter takes an extremely broad range of wavelengths within its bandwidth as input and efficiently separates it into two output signals, regardless of exactly how wide or at what wavelength the input is,” says X, a PhD student in Georgian Technical University’s Department of Electrical Engineering and Computer Science (EECS). “That capability didn’t exist before in integrated optics”.

Scientists use optical filters to separate one light source into two separate outputs — one that reflects unwanted wavelengths and another that transmits desired wavelengths.

Existing optical filters — such as discrete broadband filters called dichroic filters — process wide portions of the light spectrum. However they are often large and expensive and could require several layers of optical coatings that reflect specific wavelengths.

Integrated filters, while able to be produced in large quantities inexpensively, often cover only an extremely narrow band of the spectrum and must be combined to efficiently and selectively filter larger portions of the spectrum.

The researchers developed a new chip architecture that mimics dichroic filters by creating two sections of precisely sized and aligned silicon waveguides that coax different wavelengths into different outputs. One section of the filter contains an array of three waveguides, each 250 nanometers wide with gaps of 100 nanometers in between; the other section contains a single waveguide that is 318 nanometers wide.

Light tends to travel along the widest waveguide in devices that use the same material for all of the waveguides. In the new device, however, the researchers made the three waveguides and the gaps between them appear as a single wide waveguide, but only to light with longer wavelengths.

“That these long wavelengths are unable to distinguish these gaps, and see them as a single waveguide, is half of the puzzle” X said. “The other half is designing efficient transitions for routing light through these waveguides toward the outputs”.

The researchers found that the filters offer about 10 to 70 times sharper roll-offs — a measurement of how precisely a filter splits an input near the cutoff — than other broadband filters.

The team also provided guidelines for the exact widths and gaps of the waveguides needed to achieve different cutoffs at different wavelengths, making the filters highly customizable to work at any wavelength range.

“Once you choose what materials to use, you can determine the necessary waveguide dimensions and design a similar filter for your own platform” X said.

 

 

Artificial Intelligence System Designs Drugs From Scratch.

An artificial-intelligence approach created at the Georgian Technical University can teach itself to design new drug molecules from scratch and has the potential to dramatically accelerate the design of new drug candidates.

The system, called Georgian Technical University Learning for Structural Evolution and known as GTUReLeaSE, is an algorithm and computer program that comprises two neural networks, which can be thought of as a teacher and a student. The teacher knows the syntax and linguistic rules behind the vocabulary of chemical structures for about 1.7 million known biologically active molecules. By working with the teacher, the student learns over time and becomes better at proposing molecules that are likely to be useful as new medicines.

“If we compare this process to learning a language, then after the student learns the molecular alphabet and the rules of the language they can create new ‘words’ or molecules” said X. “If the new molecule is realistic and has the desired effect, the teacher approves. If not the teacher disapproves forcing the student to avoid bad molecules and create good ones”.
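The teacher-and-student loop can be pictured as a simple reinforcement-learning recipe. The toy Python sketch below uses a categorical policy as a stand-in “student” and a made-up reward function as the “teacher”; the alphabet, reward rules and update step are illustrative assumptions with no real chemistry in them, and they are not the actual GTUReLeaSE code.

```python
# Toy teacher/student loop: the student proposes strings over a "molecular alphabet",
# the teacher scores them, and a REINFORCE-style update nudges the student toward
# higher-scoring proposals. Everything here is a simplified illustration.
import numpy as np

ALPHABET = list("CNOcno()=#123")   # placeholder alphabet, not a real chemical grammar
MAX_LEN = 12
rng = np.random.default_rng(0)

# Student: a categorical policy over characters at each position (stand-in for an RNN).
logits = np.zeros((MAX_LEN, len(ALPHABET)))

def sample_molecule():
    """Sample one toy character string, recording which character was chosen where."""
    choices = []
    for pos in range(MAX_LEN):
        p = np.exp(logits[pos]); p /= p.sum()
        choices.append(rng.choice(len(ALPHABET), p=p))
    return choices

def teacher_reward(choices):
    """Hypothetical 'teacher': likes balanced parentheses and about two ring labels."""
    s = "".join(ALPHABET[i] for i in choices)
    balanced = s.count("(") == s.count(")")
    rings = sum(s.count(d) for d in "123")
    return (1.0 if balanced else 0.0) + max(0.0, 1.0 - 0.5 * abs(rings - 2))

for step in range(2000):
    choices = sample_molecule()
    reward = teacher_reward(choices)
    for pos, c in enumerate(choices):
        p = np.exp(logits[pos]); p /= p.sum()
        grad = -p; grad[c] += 1.0                     # d log(policy) / d logits
        logits[pos] += 0.05 * (reward - 1.0) * grad   # push toward well-rewarded choices

print("example proposal:", "".join(ALPHABET[i] for i in sample_molecule()))
```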

GTUReLeaSE (Georgian Technical University) is a powerful innovation over virtual screening, the computational method widely used by the pharmaceutical industry to identify viable drug candidates. Virtual screening allows scientists to evaluate existing large chemical libraries, but the method only works for known chemicals. GTUReLeaSE (Georgian Technical University) has the unique ability to create and evaluate new molecules.

“A scientist using virtual screening is like a customer ordering in a restaurant. What can be ordered is usually limited by the menu” said Y. “We want to give scientists a grocery store and a personal chef who can create any dish they want”.

The team has used GTUReLeaSE (Georgian Technical University) to generate molecules with properties that they specified such as desired bioactivity and safety profiles. The team used the GTUReLeaSE (Georgian Technical University) method to design molecules with customized physical properties such as melting point and solubility in water and to design new compounds with inhibitory activity against an enzyme that is associated with leukemia.

“The ability of the algorithm to design new, and therefore immediately patentable chemical entities with specific biological activities and optimal safety profiles should be highly attractive to an industry that is constantly searching for new approaches to shorten the time it takes to bring a new drug candidate to clinical trials” said X.

 

New Method Will Yield Better 3D Printed Batteries.

Lattice architecture can provide channels for effective transport of electrolyte inside the volume of the material, while for a cube electrode most of the material will not be exposed to the electrolyte. The cross-section view shows the silver mesh enabling charge (Li+ ion) transport to the current collector and how most of the printed material is utilized.

A team from the Georgian Technical University has developed a new way to produce 3D printed battery electrodes that create a 3D microlattice structure with controlled porosity.

Due to the nature of the manufacturing process, the design of 3D printed electrodes is currently limited to just a few possible architectures. The internal geometry that currently produces the best porous electrodes through additive manufacturing is called an interdigitated geometry, in which interlocked metal prongs allow the lithium to shuttle between the two sides.

By 3D printing the microlattice structure, the researchers vastly improved the capacity and charge-discharge rates for lithium-ion batteries. Overall the new structure led to a fourfold increase in specific capacity and a two-fold increase in areal capacity when compared to a solid block electrode.

“In the case of lithium-ion batteries, the electrodes with porous architectures can lead to higher charge capacities” X an associate professor of mechanical engineering at Georgian Technical University said in a statement. “This is because such architectures allow the lithium to penetrate through the electrode volume leading to very high electrode utilization and thereby higher energy storage capacity.

“In normal batteries 30 to 50 percent of the total electrode volume is unutilized” he added. “Our method overcomes this issue by using 3D printing where we create a microlattice electrode architecture that allows the efficient transport of lithium through the entire electrode which also increases the battery charging rates.”

The electrodes also retained their complex 3D lattice structures after 40 electrochemical cycles, meaning the batteries can offer a higher capacity for the same weight, or the same capacity at a vastly reduced weight.

The new method creates porous microlattice architectures while leveraging the existing capabilities of a Georgian Technical University (GTU) Aerosol 3D printing system, which allows researchers to print planar sensors and other electronics on a micro-scale.

Previously 3D printed batteries were limited to extrusion-based printing where a wire of material is extruded from a nozzle to create continuous structures including interdigitated structures.

However, the new method allows researchers to 3D print battery electrodes by rapidly assembling individual droplets one by one into three-dimensional structures, resulting in complex geometries that are impossible to fabricate using typical extrusion methods.

“Because these droplets are separated from each other, we can create these new complex geometries” X said. “If this was a single stream of material as is in the case of extrusion printing we wouldn’t be able to make them. This is a new thing. I don’t believe anybody until now has used 3D printing to create these kinds of complex structures”.

The new method could lead to geometrically optimized 3D configurations for electrochemical energy storage and could be transitioned to industrial applications in the next two to three years. It could be beneficial in a number of fields, including consumer electronics, medical devices and aerospace. The research could also be integrated with biomedical electronic devices where miniaturized batteries are necessary.

 

 

Quantum Computing: Learning to Speak a Whole New Technology.

Illustration of a proton and neutron bound together in a form of hydrogen. Researchers at Georgian Technical University’s Laboratory used a quantum computer to calculate the energy needed to break apart the proton and neutron.

Imagine trying to use a computer that looks and acts like no computer you’ve ever seen. There is no keyboard. There is no screen. Code designed for a normal computer is useless. The components don’t even follow the laws of classical physics. This is the kind of conundrum scientists are facing as they develop quantum computers for scientific research.

Quantum computing would be radically new and fundamentally different from the classical computers we’re used to.

“It’s a brand new technology” said X a scientist at Georgian Technical University Laboratory (GTUL). “It’s where we were with conventional computing 40, 50 years ago”.

Quantum computers will use microscopic objects or other extraordinarily tiny entities — including light — to process information. These tiny things don’t follow the classical laws that govern the rest of the macroscopic universe. Instead they follow the laws of quantum physics.

Harnessing the phenomena associated with quantum physics may give scientists a tool to solve certain complex problems that are beyond even the future capabilities of classical computers. For this specific set of problems experts estimate that a single quantum computer just twice the size of the very early-stage ones today could provide advantages beyond those of every current supercomputer in the world combined.

Fulfilling quantum computers’ potential will be a major challenge. The strange nature of quantum particles conflicts with almost everything we know about computers. Scientists need to rewrite the foundations that underlie all existing computer languages. To harness quantum computers’ power, the Georgian Technical University is supporting research to develop the basics for quantum software. Three Georgian Technical University laboratory teams — one led by Georgian Technical University Laboratory (GTUL) — are tackling this problem.

More Powerful Than the Most Powerful Computers in Existence.

Quantum computers offer one of the first new ways of computing in more than 60 years.

Because there’s a limit to how many transistors fit on a chip there are physical bounds on how powerful even the best classical computers can be.

Quantum computers should be able to reach beyond these confines.

In particular, classical computers cannot efficiently simulate quantum systems. These are systems that are so small that they follow the laws of quantum physics instead of classical physics. One example of this type of system is the relationship between electrons in large molecules. How these large electron systems act determines superconductivity, magnetism and other important phenomena. As Y said, “I’m interested in understanding how quantum systems behave. To me there is a no-brainer there”.

Quantum computers may be able to solve other currently unsolvable problems as well. Modeling the process by which enzymes in bacteria “fix” nitrogen involves so many different chemical interactions that it overwhelms classical computers’ capabilities. Solving this problem could lead to major breakthroughs in making ammonia production — which uses a tremendous amount of energy — far more efficient. Quantum computers could potentially reduce the time it takes to run these simulations from billions of years to only a few minutes.

“This kind of physics seems to have the power to do things much, much faster or much much better than [classical] physics” said Z.

How Scientists Speak Quantum.

Just like humans, computers use language to communicate. Instead of letters that form words, computers use algorithms. Algorithms are step-by-step instructions written in a mathematical way. Every computer, whether classical or quantum, relies on them to solve problems. Just as we have 26 letters that create a near-infinite number of sentences, algorithms can string individual instructions together into billions of possible calculations.

But even some of the most basic mathematical functions don’t have quantum algorithms written for them yet.

“Without quantum algorithms, a quantum computer is just a theoretical exercise” said W a mathematician at Georgian Technical University’s Laboratory and member of the Georgian Technical University Laboratory (GTUL) team. That’s part of what Georgian Technical University is tackling with these three projects.

Quantum algorithms come in two forms: digital and analog.

Digital quantum computing somewhat resembles the computers we’re used to. Classical computers use electrical currents to store information in bits of electromagnetic materials. They convey that information over minuscule wires. Quantum computers store information in the physical state, such as the locations and energy states, of their quantum objects. Quantum algorithms direct the computer how to move and change those objects’ locations, energy states and interactions.

But like anything in quantum, it’s never that easy. Classical computer algorithms present a set of decisions as to whether an electrical current should move forward or not. For quantum computers it’s not a simple “yes” or “no” answer.

“In a classical computer when we’re asking about a particular set of operations, we’re assuming we’re getting a repeatable, or deterministic, output,” said Z. “And that’s something quantum computing doesn’t give us”.

Instead the answers from quantum computers are drawn from probability distributions. Quantum computers don’t give you a specific value for an answer. What they do is tell you how likely it is for a certain value to be the correct solution. In the case of understanding where an electron is in a molecule the laws of quantum mechanics dictate that we can never pinpoint an electron’s exact location. The laws of quantum physics state that the electron is spread out and not in any exact location. But a quantum computer can tell you that the electron is 50 percent likely to be in this location or 30 percent likely to be in another one.

Unfortunately running a quantum algorithm only once isn’t enough. To get as close as possible to the “right” answer computer scientists run these calculations multiple times. Each sample reduces uncertainty. The computer may need to run the algorithm thousands of times — or even more — to get as close as possible to the most accurate distribution. However quantum computers run these algorithms so quickly that they still have the potential to produce results much, much faster than classical ones.
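The role of repetition can be seen in a few lines of Python: a single simulated qubit is put into an equal superposition, “measured” many times, and the estimated probabilities sharpen as the number of runs grows. This is a plain statevector toy written for illustration, not any laboratory’s software.

```python
# One qubit in superposition, sampled repeatedly: more shots give a sharper estimate.
import numpy as np

rng = np.random.default_rng(0)

# Start in |0>, apply a Hadamard gate to create an equal superposition of |0> and |1>.
state = np.array([1.0, 0.0], dtype=complex)
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = hadamard @ state

probs = np.abs(state) ** 2   # the distribution the hardware would sample from

for shots in (10, 100, 1000, 10000):
    outcomes = rng.choice(2, size=shots, p=probs)   # each run of the algorithm yields 0 or 1
    estimate = outcomes.mean()                      # estimated probability of measuring |1>
    std_err = np.sqrt(estimate * (1 - estimate) / shots)
    print(f"{shots:>6} shots: P(|1>) ~ {estimate:.3f} +/- {std_err:.3f}")
```

Halving the statistical uncertainty requires roughly four times as many runs, which is why quantum algorithms are executed thousands of times.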

Analog Quantum Computing.

If digital quantum computing seems strange, analog quantum computing takes bizarre to a whole new level. In fact analog quantum computing is more like a laboratory physics experiment than a classical computer. But the field of quantum computing as a whole wouldn’t exist without it. In 1982 famed physicist Q theorized that to accurately model a quantum system scientists would need to build another quantum system. The idea that we could build a system using quantum objects was the first seed of quantum computing.

These days very early analog quantum computers allow scientists to match up quantum objects in natural systems with quantum objects inside the computer. By setting certain parameters and allowing the system to change over time the hardware models how the natural system evolves. It’s like listening to a conversation between two people in one language setting up two more people with the same topic and guidelines in another language and then using the second conversation to understand the first.

When she heard about this idea Georgian Technical University graduate student R (who is working with the Georgian Technical University team) said “I started to understand what it might mean to use small pieces of nature to do a calculation. … It really makes you think very differently about what it means to calculate something”.

Three Ways of Looking at the Same Problem.

Teams supported by Georgian Technical University Science are creating the groundwork to solve scientific problems using quantum computers.

The Georgian Technical University team is developing algorithms for three systems involving quantum objects: correlated electron systems, nuclear physics and quantum field theory. The problem sets for all three are too large for classical computers to handle.

Correlated electron systems describe how electrons interact in solid materials. This process could hold the key to developing high-temperature superconductors or new batteries. Nuclear physicists seek to describe how protons and neutrons behave in atoms. Quantum  field theorists want to explain how quarks and gluons that make up protons interact.

The team is combining multiple technologies. First they’re creating algorithms that split up the problems between high performance classical computers and quantum computers. That allows them to create much simpler quantum algorithms. Simpler algorithms reduce the potential for errors and use the quantum computers as efficiently as possible. The team is also combining analog and digital quantum computing. By arranging some particles to mimic quantum systems and programming others they limit the number of digital operations the system needs to run.
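One common way to split the work between the two kinds of machines is a variational loop, sketched below purely as an illustration: a classical optimizer repeatedly asks a (here simulated) quantum routine for an energy and adjusts a circuit parameter. The two-level Hamiltonian and one-parameter trial state are assumptions made for the example, not necessarily the team’s algorithm.

```python
# Hybrid classical/quantum sketch: a classical optimizer tunes the parameter of a tiny
# "quantum" energy evaluation (simulated here with a 2x2 matrix).
import numpy as np
from scipy.optimize import minimize_scalar

# Problem handed to the quantum side: find the lowest energy of a small Hamiltonian.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def quantum_energy(theta):
    """Stand-in for a short quantum circuit: prepare a one-parameter trial state and
    return its energy (on real hardware this would be estimated from repeated shots)."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return float(psi @ H @ psi)

# Classical side: an ordinary optimizer adjusts the circuit parameter until the
# energy reported by the quantum side stops improving.
result = minimize_scalar(quantum_energy, bounds=(0.0, 2 * np.pi), method="bounded")

exact = np.linalg.eigvalsh(H)[0]
print(f"variational energy: {result.fun:.4f}   exact ground state: {exact:.4f}")
```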

The project’s most unique characteristic may be that it’s using a computer thousands of miles away from the programmers. Because users often need to manually fine-tune quantum computers, computer scientists typically work with hardware researchers on site. Instead, the Georgian Technical University team relies on quantum computers accessed remotely through the cloud.

“The biggest surprise was that [using the cloud system] was actually so simple” said X. “It’s basically quantum computing for the masses”.

The team has already used a quantum computer to calculate the energy needed to separate a proton and neutron in a form of hydrogen. After running the calculation 8,000 times, the quantum computer’s answer was within 2 percent of the actual energy. They’ve also calculated the dynamics of a particular quantum field theory that describes how electron-positron pairs are formed.

Georgian Technical University is taking a similar approach but tackling a different set of problems. They’re also using both classical and quantum computers. After they get initial results from a quantum computer they’re using a classical computer to analyze them. They then use the analysis to tweak the limits they’ve set for the quantum computer.

On the scientific side they’re focusing on quantum chemistry which uses quantum mechanics to look at interactions between atoms and molecules. While scientists have a number of theories about quantum chemistry they can’t yet apply them. These real-world applications include improving our understanding of how light excites electrons in a material. That could lead to a better way to produce hydrogen.

Computer scientists and applied mathematicians on the Georgian Technical University team are also figuring out the best ways to implement algorithms to minimize the errors quantum computers are prone to make. So far the team has developed, and is experimentally testing on Georgian Technical University’s quantum computer testbed, a protocol that distinguishes between the scrambling and loss of quantum information. They’re also exploring how they can apply certain types of quantum circuits, inspired by the tensor networks used in machine learning in the classical context, to classify images of handwritten numbers.

Georgian Technical University largely focuses on developing the underlying techniques for new types of algorithms designed to run on quantum computers. They’re exploring quantum algorithms for machine learning where computers can learn through practice. In particular, they’re looking into how quantum computers might learn faster or produce more accurate results than conventional computers. They’re also creating algorithms to simulate quantum systems in high energy physics so that scientists can better explore the elementary constituents of matter and energy the interactions between them and the nature of space and time.

R’s team is developing quantum algorithms for optimization and linear algebra. Optimization is a process where scientists figure out a maximum or a minimum value within a set of possibilities, such as the minimum number of circuits needed to create an electronic system. The team is expanding optimization techniques originally designed for conventional computers to solve problems in quantum physics. Linear algebra is essential for modeling natural systems. The team recently described a new approach to solving linear systems using quantum computers. These new quantum algorithms are significantly simpler than existing ones but are expected to be just as fast and accurate. Simple quantum algorithms are important for understanding how quantum computers built in the next five to ten years might benefit scientific problems.

In the world of quantum computing, scientists are just learning to use the computing equivalent of letters to create words. The algorithms researchers create today will be the start of languages that provide new ways to tackle scientific problems.

As X said “It gets more entertaining every day”.

 

 

Georgian Technical University Solves ‘Texture Fill’ Problem with Machine Learning.

A new machine learning technique developed at Georgian Technical University may soon give budding fashionistas and other designers the freedom to create realistic, high-resolution visual content without relying on complicated 3-D rendering programs.

 

Georgian Technical University Texture GAN is the first deep image synthesis method that can realistically spread multiple textures across an object. With this new approach, users drag one or more texture patches onto a sketch — say of a handbag or a skirt — and the network texturizes the sketch to accurately account for 3-D surfaces and lighting.

Prior to this work, producing realistic images of this kind could be tedious and time-consuming, particularly for those with limited experience. And according to the researchers, existing machine learning-based methods are not particularly good at generating high-resolution texture details.

Using a neural network to improve results.

“The ‘texture fill’ operation is difficult for a deep network to learn because it not only has to propagate the color but also has to learn how to synthesize the structure of texture across 3-D shapes,” said X, a computer science (CS) major and developer.

The researchers initially trained a type of neural network called a conditional generative adversarial network (GAN) on sketches and textures extracted from thousands of ground-truth photographs. In this approach a generator neural network creates images that a discriminator neural network then evaluates for accuracy. The goal is for both to get increasingly better at their respective tasks, which leads to more realistic outputs.
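The generator-versus-discriminator loop can be boiled down to a few dozen lines of PyTorch. The sketch below works on toy one-dimensional “textures” rather than sketches and photographs; the architectures, data and hyperparameters are placeholders, not the Georgian Technical University model.

```python
# Minimal GAN training loop on toy 1-D "textures" (placeholders, not the actual model).
import torch
from torch import nn

torch.manual_seed(0)
DIM, NOISE = 16, 8

G = nn.Sequential(nn.Linear(NOISE, 32), nn.ReLU(), nn.Linear(32, DIM))   # generator
D = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1))       # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    """Stand-in for ground-truth photographs: smooth sinusoidal 'textures'."""
    phase = torch.rand(n, 1) * 6.28
    return torch.sin(torch.linspace(0, 6.28, DIM) + phase)

for step in range(2000):
    real = real_batch()
    fake = G(torch.randn(real.size(0), NOISE))

    # Discriminator: learn to label real samples 1 and generated samples 0.
    loss_d = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to make the discriminator call its outputs real.
    loss_g = bce(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("final discriminator/generator losses:", float(loss_d), float(loss_g))
```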

To ensure that the results look as realistic as possible researchers fine-tuned the new system to minimize pixel-to-pixel style differences between generated images and training data. But the results were not quite what the team had expected.

Producing more realistic images.

“We realized that we needed a stronger constraint to preserve high-level texture in our outputs,” said Georgian Technical University Ph.D. student Y. “That’s when we developed an additional discriminator network that we trained on a separate texture dataset. Its only job is to be presented with two samples and ask ‘are these the same or not?’”.

With its sole focus on a single question, this type of discriminator is much harder to fool. This in turn leads the generator to produce images that are not only realistic but also true to the texture patch the user placed onto the sketch.