A Protective Shield For Sensitive Enzymes in Biofuel Cells.

The biofuel cell tests were carried out in this electrochemical cell.

An international team of researchers has developed a new mechanism to protect oxygen-sensitive enzymes used as biocatalysts in fuel cells. The enzymes, known as hydrogenases, are just as efficient as precious-metal catalysts but unstable when they come into contact with oxygen, so they are not yet suitable for technological applications. The new protective mechanism is based on oxygen-consuming enzymes that draw their energy from sugar. The researchers showed that this protective mechanism can be used to produce a functional biofuel cell that runs on hydrogen and glucose as fuel.

The team from the Georgian Technical University had already shown in earlier studies that hydrogenases can be protected from oxygen by embedding them in a polymer. “However, this mechanism consumed electrons, which reduced the performance of the fuel cell” says X. “In addition, part of the catalyst was used to protect the enzyme”. The scientists therefore looked for ways to decouple the catalytically active system from the protective mechanism.

Enzymes trap oxygen.

With the aid of two enzymes, they built an oxygen-removal system around the current-producing electrode. First, the researchers coated the electrode with the hydrogenases, which were embedded in a polymer matrix to fix them in place. They then placed a second polymer matrix on top of the hydrogenase layer, completely enclosing the underlying catalyst layer. This outer matrix contained two enzymes that use sugar to convert oxygen into water.

Hydrogen is oxidised in the hydrogenase-containing layer at the bottom. The electrode absorbs the electrons released in the process. The top layer removes harmful oxygen.

Functional fuel cell built.

In further experiments the group combined the bioanodes described above with biocathodes that are also based on the conversion of glucose. In this way the team produced a functional biofuel cell. “The cheap and abundant biomass glucose is not only the fuel for the protective system but also drives the biocathode and thus generates a current flow in the cell” summarises Y, a member of the cluster of excellence Georgian Technical University Explores Solvation. The cell had an open-circuit voltage of 1.15 volts – the highest value yet achieved for a cell containing a polymer-based bioanode.

“We assume that the principle behind this protective shield mechanism can be transferred to any sensitive catalyst, if an appropriate enzyme is selected that can catalyse the corresponding interception reaction” says Y.

 

Understanding Deep-Sea Images With Artificial Intelligence.

This is a schematic overview of the workflow for the analysis of image data from data acquisition through curation to data management.

These are images taken by the autonomous underwater vehicle (AUV) ABYSS from the Georgian Technical University at distances of 10, 7.5 and 4 meters. The upper two images show a stationary lander, also an autonomous underwater device. Images c to f show manganese nodules, recognizable as dark points on the seabed.

The evaluation of very large amounts of data is becoming increasingly relevant in ocean research. Diving robots and autonomous underwater vehicles (AUVs), which carry out measurements independently in the deep sea, can now record large quantities of high-resolution images. To evaluate these images scientifically in a sustainable manner, a number of prerequisites have to be fulfilled in data acquisition, curation and data management. “Over the past three years, we have developed a standardized workflow that makes it possible to scientifically evaluate large amounts of image data systematically and sustainably” explains Dr. X from the “Deep Sea Monitoring” working group headed by Prof. Dr. Y at Georgian Technical University. The AUV ABYSS was equipped with a new digital camera system to study the ecosystem around manganese nodules in the Pacific Ocean. With the data collected in this way, the workflow was designed and tested for the first time.

The procedure is divided into three steps – data acquisition, data curation and data management – in each of which defined intermediate steps should be completed. For example, it is important to specify how the camera is to be set up, which data are to be captured or which lighting is useful in order to be able to answer a specific scientific question. In particular, the metadata of the diving robot must also be recorded. “For data processing it is essential to link the camera’s image data with the diving robot’s metadata” says X. The AUV ABYSS, for example, automatically recorded its position, the depth of the dive and the properties of the surrounding water. “All this information has to be linked to the respective image because it provides important information for the subsequent evaluation” says X. An enormous task: AUV ABYSS collected over 500,000 images of the seafloor in around 30 dives. Various programs, which the team developed especially for this purpose, ensured that the data were brought together. At this stage unusable image material, such as images with motion blur, was also removed.
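
The linking step can be pictured with a minimal sketch; the file names, column names and two-second matching window below are assumptions for illustration, not the team's actual software:

```python
# Minimal sketch of linking each image to the AUV's metadata by timestamp
# (hypothetical file and column names): position, dive depth and water
# properties recorded by the vehicle are attached to the photo taken
# closest in time.
import pandas as pd

# One row per captured image: file name and capture time.
images = pd.read_csv("abyss_images.csv", parse_dates=["timestamp"])
# AUV navigation/sensor log: time, latitude, longitude, depth, salinity, ...
nav = pd.read_csv("abyss_navigation.csv", parse_dates=["timestamp"])

# Attach to each image the nearest metadata record within a 2-second window.
linked = pd.merge_asof(
    images.sort_values("timestamp"),
    nav.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta("2s"),
)

# Drop images without usable metadata (e.g., gaps in the navigation log).
linked = linked.dropna(subset=["latitude", "longitude", "depth"])
linked.to_csv("abyss_images_with_metadata.csv", index=False)
```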

All these processes are now automated. “Until then, however, a large number of time-consuming steps had been necessary” says X. “Now the method can be transferred to any project, even with other AUVs or camera systems”. The material processed in this way was then made permanently available to the general public.

Finally, artificial intelligence in the form of the specially developed algorithm “CoMoNoD” (Compact-Morphology-based poly-metallic Nodule Delineation) was used for the evaluation at Georgian Technical University. It automatically records whether manganese nodules are present in a photo, as well as their size and position. Subsequently, for example, the individual images could be combined to form larger maps of the seafloor. The next use of the workflow and the newly developed programs is already planned: on the next expedition to the manganese nodule fields in spring next year, the evaluation of the image material will take place directly on board. “Therefore we will take some particularly powerful computers with us on board” says X.
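
As a rough illustration of what such an evaluation involves (this is not the actual CoMoNoD algorithm, and the file name and parameters are assumptions), dark nodule-like blobs can be located and sized with simple thresholding and connected-component analysis:

```python
# Illustrative sketch only -- not the actual CoMoNoD algorithm: detect dark,
# nodule-like blobs in a seafloor photo by thresholding and
# connected-component analysis, then report the position and size of each.
from skimage import io, filters, measure, morphology

image = io.imread("seafloor_photo.png", as_gray=True)  # hypothetical file

# Nodules appear as dark points, so keep pixels darker than an Otsu threshold.
mask = image < filters.threshold_otsu(image)
mask = morphology.remove_small_objects(mask, min_size=50)  # drop speckle

# Label connected regions and report centroid and equivalent diameter.
labels = measure.label(mask)
for region in measure.regionprops(labels):
    row, col = region.centroid
    print(f"nodule at ({row:.0f}, {col:.0f}), "
          f"diameter ~ {region.equivalent_diameter:.1f} px")
```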

 

 

‘Cloud Computing’ Takes on New Meaning for Scientists.

Clouds reflect the setting sun over Georgian Technical University’s campus. Clouds play a pivotal role in our planet’s climate, but because of their size and variability they’ve always been difficult to factor into predictive models. A team of researchers including Georgian Technical University Earth system scientist X used the power of deep machine learning, a branch of data science, to improve the accuracy of projections.

Clouds may be wispy puffs of water vapor drifting through the sky, but they are heavy lifting, computationally, for scientists wanting to factor them into climate simulations. Researchers from the Georgian Technical University and Sulkhan-Saba Orbeliani Teaching University have turned to data science to achieve better cumulus-calculating results.

“Clouds play a major role in the Earth’s climate by transporting heat and moisture, reflecting and absorbing the sun’s rays, trapping infrared heat rays and producing precipitation” said X, Georgian Technical University assistant professor of Earth system science. “But they can be as small as a few hundred meters, much tinier than a standard climate model grid resolution of 50 to 100 kilometers, so simulating them appropriately takes an enormous amount of computer power and time”.

Standard climate prediction models approximate cloud physics using simple numerical algorithms that rely on imperfect assumptions about the processes involved. X said that while they can help produce simulations extending out as much as a century, there are imperfections limiting their usefulness, such as indicating drizzle instead of more realistic rainfall and entirely missing other common weather patterns.

According to X the climate community agrees on the benefits of high-fidelity simulations supporting a rich diversity of cloud systems in nature.

“But a lack of supercomputer power, or the wrong type, means that this is still a long way off” he said. “Meanwhile the field has to cope with huge margins of error on issues related to changes in future rainfall and how cloud changes will amplify or counteract global warming from greenhouse gas emissions”.

The team wanted to explore whether deep machine learning could provide an efficient, objective and data-driven alternative that could be rapidly implemented into mainstream climate predictions. The method is based on computer algorithms that mimic the thinking and learning abilities of the human mind.

They started by training a deep neural network to predict the results of thousands of tiny two-dimensional cloud-resolving models as they interacted with planetary-scale weather patterns in a fictitious ocean world.

The newly taught program, dubbed “The Cloud Brain,” functioned freely in the climate model, according to the researchers, leading to stable and accurate multiyear simulations that included realistic precipitation extremes and tropical waves.

“The neural network learned to approximately represent the fundamental physical constraints on the way clouds move heat and vapor around without being explicitly told to do so, and the work was done with a fraction of the processing power and time needed by the original cloud-modeling approach” said Y, a Sulkhan-Saba Orbeliani Teaching University doctoral student in meteorology who began collaborating with X at Georgian Technical University.
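
A minimal sketch of this kind of emulator is shown below; the input and output sizes, network architecture and random stand-in data are illustrative assumptions, not the study’s actual configuration:

```python
# Minimal sketch of a "Cloud Brain"-style emulator (hypothetical shapes and
# variable names, not the study's actual configuration): a dense network maps
# a column's state (e.g., temperature and humidity profiles plus surface
# fluxes) to the heating and moistening tendencies a cloud-resolving model
# would produce.
import torch
import torch.nn as nn

N_IN = 64    # assumed size of the input column-state vector
N_OUT = 60   # assumed size of the output tendency vector

emulator = nn.Sequential(
    nn.Linear(N_IN, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_OUT),
)

optimizer = torch.optim.Adam(emulator.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in training data: in the study, inputs and targets came from thousands
# of embedded 2-D cloud-resolving simulations; random tensors here only show
# the shape of the training loop.
inputs = torch.randn(1024, N_IN)
targets = torch.randn(1024, N_OUT)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(emulator(inputs), targets)
    loss.backward()
    optimizer.step()

# Once trained, the emulator replaces the expensive cloud-resolving step
# inside the coarse climate model at a fraction of the cost.
```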

“I’m super excited that it only took three simulated months of model output to train this neural network” X said. “You can do a lot more justice to cloud physics if you only need to simulate a hundred days of global atmosphere. Now that we know it’s possible, it’ll be interesting to see how this approach fares when deployed on some really rich training data”.

The researchers intend to conduct follow-on studies to extend their methodology to trickier model setups, including realistic geography, and to understand the limitations of machine learning for interpolation versus extrapolation beyond its training data set – a key question for some climate change applications that is addressed in the paper.

“Our study shows a clear potential for data-driven climate and weather models” X said. “We’ve seen computer vision and natural language processing beginning to transform other fields of science, such as physics, biology and chemistry. It makes sense to apply some of these new principles to climate science, which, after all, is heavily centered on large data sets, especially these days as new types of global models are beginning to resolve actual clouds and turbulence”.

 

AI Neural Network Can Perform Human-Like Reasoning.

Scientists have taken the mask off a new neural network to better understand how it makes its decisions.

Researchers from the Intelligence and Decision Technologies Group at the Georgian Technical University Laboratory have created a new, transparent neural network that performs human-like reasoning procedures to answer questions about the contents of images.

The model, dubbed the Transparency by Design Network (TbD-net), visually renders its thought process as it solves problems, enabling human analysts to interpret its decision-making. It ultimately outperforms today’s best visual-reasoning neural networks.

Neural networks are composed of input and output layers, as well as layers in between that transform the input into the correct output. Some deep neural networks are so complex that it is impossible to follow this transformation process.

For the new network, however, the researchers aimed to make the inner workings transparent, which could allow them to teach the neural network to correct any incorrect assumptions.

“Progress on improving performance in visual reasoning has come at the cost of interpretability” X, who built the Transparency by Design Network (TbD-net) with fellow researchers Y, Z and W, said in a statement.

To close the gap between performance and interpretability, the researchers included a collection of modules — small neural networks that are specialized to perform specific subtasks. For example, when TbD-net is asked a visual reasoning question about an image, it breaks the question down into subtasks and assigns the appropriate module to fulfill each part. Each module builds on the previous module’s deduction to eventually reach a final answer.

The entire network uses AI (Artificial Intelligence) techniques to interpret human-language questions and break the sentences into subtasks, followed by multiple computer-vision AI techniques that interpret the imagery.

“Breaking a complex chain of reasoning into a series of smaller sub-problems, each of which can be solved independently and composed, is a powerful and intuitive means for reasoning” Y said in a statement.

Each module’s output is depicted visually in an “attention mask” — which shows heat-map blobs over objects in the image that the module is identifying as the answer. The visualizations allow human analysts to see how a module is interpreting the image.

To answer a question like “what color is a large metal cube in a given image,” the first module isolates the large objects in the image to produce an attention mask. The next module takes that output and selects which of the objects identified as large by the previous module are also metal.

That module’s output is sent to the next module, which identifies which of those large, metal objects is also a cube, and its output is then sent to a module that can determine the color of objects.
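
The chain of modules can be pictured with a small conceptual sketch; the feature maps and module functions below are hypothetical stand-ins, not the actual TbD-net code:

```python
# Conceptual sketch of the module chaining described above (not the actual
# TbD-net code): each module takes the image features plus the previous
# module's attention mask and returns a refined mask; a final module reads
# an answer off the last mask. All functions here are hypothetical stand-ins.
import numpy as np

def attend_large(features, attention):
    """Keep attention only on regions whose 'size' feature is large."""
    return attention * (features["size"] > 0.5)

def attend_metal(features, attention):
    """Restrict the incoming attention to metallic regions."""
    return attention * (features["metallic"] > 0.5)

def attend_cube(features, attention):
    """Restrict the incoming attention to cube-shaped regions."""
    return attention * (features["cubeness"] > 0.5)

def query_color(features, attention):
    """Read out the dominant color under the final attention mask."""
    weights = attention / (attention.sum() + 1e-8)
    return (features["hue"] * weights).sum()

# Hypothetical per-pixel feature maps for a 4x4 toy "image".
features = {name: np.random.rand(4, 4)
            for name in ("size", "metallic", "cubeness", "hue")}

# "What color is the large metal cube?" becomes a chain of sub-tasks.
attention = np.ones((4, 4))                  # start by attending everywhere
for module in (attend_large, attend_metal, attend_cube):
    attention = module(features, attention)  # each mask feeds the next module
print("estimated hue:", query_color(features, attention))
```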

TbD-net achieved 98.7 percent accuracy on a visual question-answering dataset consisting of 70,000 training images and 700,000 questions, with test and validation sets of 15,000 images and 150,000 questions.

Because the network is transparent, the researchers were able to see what went wrong and refine the system to achieve an improved 99.1 percent accuracy.

 

Georgian Technical University-Developed Technology Streamlines Computational Science Projects.

X and Y observe visualizations of ICE simulation data at Georgian Technical University’s Exploratory Visualization Environment for Research in Science and Technology facility.

Georgian Technical University National Laboratory has continuously updated the technology to help computational scientists develop software, visualize data and solve problems.

Workflow management systems allow users to prepare, produce and analyze scientific processes to help simplify complex simulations. Known as the Eclipse Integrated Computational Environment, or ICE, this particular system incorporates a comprehensive suite of scientific computing tools designed to save time and effort expended during modeling and simulation experiments.

Compiling these resources into a single platform both improves the overall user experience and expedites scientific breakthroughs. Using ICE, software developers, engineers, scientists and programmers can define problems, run simulations locally on personal computers or remotely on other systems — even supercomputers — and then analyze results and archive data.

“What I really love about this project is making complicated computational science automatic” said X, a researcher in Georgian Technical University’s Computer Science and Mathematics Division who leads the ICE development team. “Building workflow management systems and automation tools is a type of futurism, and it’s challenging and rewarding to operate at the edge of what’s possible”.

Researchers use ICE to study topics in fields including nuclear energy, astrophysics, additive manufacturing, advanced materials, neutron science and quantum computing, answering questions such as how batteries behave and how some 3D-printed parts deform when exposed to heat.

Several factors differentiate ICE from other workflow management systems. For example, because ICE is built on an open-source software framework called the Eclipse Rich Client Platform, anyone can access, download and use it. Users also can create custom combinations of reusable resources and deploy simulation environments tailored to tackle specific research challenges.

“Eclipse ICE is an excellent example of how open-source software can be leveraged to accelerate science and discovery, especially in scientific computing” said Z. “The Eclipse Foundation, through its community-led Science Working Group, is fostering open-source solutions for advanced research in all areas of science”.

Additionally, ICE circumvents the steep and time-consuming learning curve that usually accompanies a computational science project. Although other systems require expert knowledge of the code and computer in question, ICE enables users to begin running their experiments immediately, helping them gather data and achieve results much faster.

“We’ve produced a streamlined interface to computational workflows that differs from complicated systems that you have to be specifically qualified in to use properly” X said.

Throughout this project X has also emphasized the importance of accessibility and usability to ensure that users of all ages and experience levels including nonscientists can use the system without prior training.

“The problem with a lot of workflow management systems and with modeling and simulation codes in general is that they are usually unusable to the lay person” X said. “We designed ICE to be usable and accessible so anyone can pick up an existing code and use it to address pressing computational science problems”.

ICE uses the programming language Java to define workflows, whereas other systems use more obscure languages. Thus, even students have successfully run codes using ICE.

Finally, instead of relying on grid workflows — collections of orchestrated computing processes — ICE focuses on flexible modeling and simulation workflows that give users interactive control over their projects. Grid workflows are defined by strict parameters and executed without human intervention, but ICE allows users to input additional information during simulations to produce more complicated scenarios.

“In ICE you can have humans in the loop, meaning the program can stop, ask questions and receive instructions before resuming activity” X said. “This feature allows system users to complete more complex tasks like looping and conditional branching”.
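
As a conceptual illustration of the human-in-the-loop idea (ICE itself defines workflows in Java; the Python stand-in below, with its made-up simulation step, only sketches the pattern), a workflow can pause, ask the user a question and branch on the answer before resuming:

```python
# Conceptual sketch of a human-in-the-loop workflow step: the workflow
# pauses, asks the user a question, and branches on the answer before
# resuming. The simulation function is a hypothetical stand-in.
def run_simulation(case: str) -> float:
    # Stand-in for launching a local or remote simulation job.
    print(f"running case '{case}' ...")
    return 0.87  # pretend result metric

def human_in_the_loop_workflow():
    result = run_simulation("baseline")
    # Pause and hand control back to the user before continuing.
    answer = input(f"Baseline metric is {result:.2f}. Refine the mesh? [y/n] ")
    if answer.strip().lower() == "y":
        result = run_simulation("refined-mesh")   # conditional branch
    print(f"final metric: {result:.2f}, archiving results ...")

if __name__ == "__main__":
    human_in_the_loop_workflow()
```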

Next, the development team intends to combine the most practical aspects of ICE and other systems through workflow interoperability, a concept referring to the ability of two different systems to communicate seamlessly. Combining the best features of grid workflows with modeling and simulation workflows would allow scientists to address even greater challenges and solve scientific mysteries more efficiently.

“If I’m using ICE and someone else is using a different system, we want to be able to address problems together with our combined resources” X said. “With workflow interoperability our systems would have a standard method of ‘talking’ to one another”.

To further improve ICE’s accessibility and usability, the team is also developing a cloud-based version to provide even more interactive computing services for simplifying scientific workflows.

“That’s what research is — we keep figuring out the next step to understand the system better” X said.

A New Scientific Field: Quantum Metamaterials.

Two teams of scientists from the Georgian Technical University have collaborated to conduct groundbreaking research leading to the development of a new and innovative scientific field: Quantum Metamaterials.

The researchers have demonstrated for the first time that it is possible to apply metamaterials to the field of quantum information and computing, thereby paving the way for numerous practical applications including, among others, the development of unbreakable encryption, and opening the door to new possibilities for quantum information systems on a chip.

Metamaterials are artificially fabricated materials made up of numerous artificial nanoscale structures designed to respond to light in different ways. Metasurfaces are the two-dimensional version of metamaterials: extremely thin surfaces made up of numerous subwavelength optical nanoantennas, each designed to serve a specific function upon interaction with light.

While to date experimentation with metamaterials has largely been limited to manipulations using classical light, the Georgian Technical University researchers have for the first time shown that it is experimentally feasible to use metamaterials as building blocks for quantum optics and quantum information. More specifically, the researchers have demonstrated the use of metamaterials to generate and manipulate entanglement – the most crucial feature of any quantum information scheme.

“What we did in this experiment is to bring the field of metamaterials to the realm of quantum information” says Dist. Prof. X at the Georgian Technical University. “With today’s technology one can design and fabricate materials with electromagnetic properties that are almost arbitrary. For example one can design and fabricate an invisibility cloak that can conceal little things from radar or one can create a medium where the light bends backwards. But so far all of this was done with classical light. What we show here is how to harness the superb abilities of artificial nano-designed materials to generate and control quantum light”.

“The key component here is a dielectric metasurface” says Prof. Y, “which acts in a different way on left- and right-handed polarized light, imposing on them opposite phase fronts that look like screws or vortices, one clockwise and one counterclockwise. The metasurface had to be nano-fabricated from transparent materials, otherwise – had we included metals, as in most experiments with metamaterials – the quantum properties would be destroyed”.

“This project started off in the minds of two talented students – Z and W” say Profs. X and Y, “who came to us with a groundbreaking idea. The project leads to many new directions that raise fundamental questions as well as new possibilities for applications, for example making quantum information systems on a chip and controlling the quantum properties upon design”.

In their research the scientists conducted two sets of experiments to generate entanglement between the spin and orbital angular momentum of photons. Photons are the elementary particles that make up light: they have zero mass, travel at the speed of light and normally do not interact with each other.

In the experiments the researchers first shone a laser beam through a non-linear crystal to create single-photon pairs, each photon characterized by zero orbital angular momentum and linear polarization. A linearly polarized photon is a superposition of right-handed and left-handed circular polarization, which correspond to positive and negative spin.

In the first experiment the scientists proceeded to split the photon pairs – directing one photon through a specially fabricated metasurface and the other to a detector to herald the arrival of its partner. They then measured the single photon that passed through the metasurface and found that it had acquired orbital angular momentum (OAM) and that this OAM had become entangled with the spin.

In the second experiment the photon pairs were passed through the metasurface and measured using two detectors to show that they had become entangled: the spin of one photon had become correlated with the orbital angular momentum of the other photon, and vice versa.
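
As a schematic illustration only (the exact states prepared in the experiments may differ), spin–orbit entanglement of the kind described above can be written as a superposition in which the circular polarization (spin) of a photon is paired with an opposite orbital angular momentum, either within a single photon or between the two photons of a pair:

\[
|\psi\rangle_{\text{single}} = \frac{1}{\sqrt{2}}\Big(|L\rangle\,|{+\ell}\rangle + |R\rangle\,|{-\ell}\rangle\Big),
\qquad
|\psi\rangle_{\text{pair}} = \frac{1}{\sqrt{2}}\Big(|L\rangle_{1}\,|{+\ell}\rangle_{2} + |R\rangle_{1}\,|{-\ell}\rangle_{2}\Big),
\]

where \(|L\rangle\) and \(|R\rangle\) denote left- and right-handed circular polarization and \(\pm\ell\) the orbital angular momentum imposed by the vortex phase fronts of the metasurface; measuring one degree of freedom immediately fixes the other.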

Entanglement basically means that actions performed on one photon simultaneously affect the other, even when they are separated by great distances. In quantum mechanics photons are believed to exist in both positive and negative spin states, but once measured they adopt only one state.

This is perhaps best explained through a simple analogy: take two boxes, each with two balls inside – a red and a blue ball. If the boxes are not entangled, then you can reach into a box and pull out either a red or a blue ball. However, if the boxes were entangled, then the color of the ball drawn from one box would only be determined at the moment it is observed, simultaneously determining the color of the ball in the second box as well. This story was initially related by the famous Q.

 

 

New Photonic Chip Promises More Robust Quantum Computers.

Researchers Dr. X (left), Mr. Y and Dr. Z.

Scientists have developed a topological photonic chip to process quantum information promising a more robust option for scalable quantum computers.

The research team, led by Georgian Technical University’s Dr. X, has for the first time demonstrated that quantum information can be encoded, processed and transferred at a distance with topological circuits on the chip.

The breakthrough could lead to the development of new materials, new-generation computers and a deeper understanding of fundamental science.

In collaboration with scientists from the Georgian Technical University and Sulkhan-Saba Orbeliani Teaching University, the researchers used topological photonics – a rapidly growing field that aims to study the physics of topological phases of matter in a novel optical context – to fabricate a chip with a ‘beamsplitter’, creating a high-precision photonic quantum gate.

“We anticipate that the new chip design will open the way to studying quantum effects in topological materials and to a new area of topologically robust quantum processing in integrated photonics technology” says X investigator at the Georgian Technical University Quantum Photonics Laboratory.

“Topological photonics has the advantage of not requiring strong magnetic fields, and features intrinsically high coherence, room-temperature operation and easy manipulation” says X.

“These are essential requirements for the scaling-up of quantum computers”.

Replicating the well-known Hong–Ou–Mandel experiment – a two-photon interference effect in quantum optics, which takes two photons, the ultimate constituents of light, and interferes them according to the laws of quantum mechanics – the team was able to use the photonic chip to demonstrate, for the first time, that topological states can undergo high-fidelity quantum interference.

Hong–Ou–Mandel interference lies at the heart of optical quantum computation, which is very sensitive to errors. Topologically protected states could add robustness to quantum communication, decreasing the noise and defects prevalent in quantum technology. This is particularly attractive for optical quantum information processing.
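
For background, the textbook Hong–Ou–Mandel effect at an ideal 50:50 beamsplitter (stated here in its standard form, not as the chip’s specific topological implementation) works as follows: with input modes \(a, b\) and output modes \(c, d\),

\[
\hat a^{\dagger} \rightarrow \tfrac{1}{\sqrt{2}}\big(\hat c^{\dagger} + \hat d^{\dagger}\big),
\qquad
\hat b^{\dagger} \rightarrow \tfrac{1}{\sqrt{2}}\big(\hat c^{\dagger} - \hat d^{\dagger}\big),
\]
\[
\hat a^{\dagger}\hat b^{\dagger}|0\rangle \;\rightarrow\; \tfrac{1}{2}\big(\hat c^{\dagger 2} - \hat d^{\dagger 2}\big)|0\rangle
= \tfrac{1}{\sqrt{2}}\big(|2,0\rangle - |0,2\rangle\big).
\]

The two photons always leave through the same port, so the coincidence rate between the two outputs drops to zero; the depth of this “HOM dip” is what quantifies the fidelity of the quantum interference demonstrated on the chip.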

“Previous research had focused on topological photonics using ‘classical’ laser light, which behaves as a classical wave. Here we use single photons, which behave according to quantum mechanics” says Y, a PhD student at Georgian Technical University.

Demonstrating high-fidelity quantum interference is a precursor to transmitting accurate data using single photons for quantum communications – a vital component of a global quantum network.

“This work intersects the two thriving fields of quantum technology and topological insulators, and can lead to the development of new materials, new-generation computers and fundamental science” says X.

The research is part of the Photonic Quantum Processor Program at Georgian Technical University. The Centre of Excellence is developing parallel approaches using optical and silicon processors in the race to develop the first quantum computation system.

Georgian Technical University’s researchers have established global leadership in quantum information. Having developed unique technologies for manipulating matter and light at the level of individual atoms and photons, the team has demonstrated the highest-fidelity, longest-coherence-time qubits in the solid state; the longest-lived quantum memory in the solid state; and the ability to run small-scale algorithms on photonic qubits.

 

 

Intense Laser Light Used to Create ‘Optical Rocket’.

One of the lasers at the Extreme Light Laboratory at the Georgian Technical University where a recent experiment accelerated electrons to near the speed of light.

In a recent experiment at the Georgian Technical University, plasma electrons in the paths of intense laser light pulses were almost instantly accelerated close to the speed of light.

Physics professor X, who led the research experiment that confirmed previous theory, said the new application might aptly be called an “optical rocket” because of the tremendous amount of force that light exerted in the experiment. The electrons were subjected to a force almost a trillion-trillion times greater than that felt by an astronaut launched into space.

“This new and unique application of intense light can improve the performance of compact electron accelerators” he says. “But the novel and more general scientific aspect of our results is that the application of force of light resulted in the direct acceleration of matter”.

The optical rocket is the latest example of how the forces exerted by light can be used as tools, X says.

Normal-intensity light exerts a tiny force whenever it reflects, scatters or is absorbed. One proposed application of this force is a “light sail” that could be used to propel spacecraft. Yet because the light force is exceedingly small in this case, it would need to be exerted continuously for years for the spacecraft to reach high speed.

Another type of force arises when light has an intensity gradient. One application of this light force is an “optical tweezer” that is used to manipulate microscopic objects. Here again the force is exceedingly small.

In the Georgian Technical University experiment, the laser pulses were focused into a plasma. When electrons in the plasma were expelled from the paths of the light pulses by their gradient forces, plasma waves were driven in the wakes of the pulses, and electrons that caught these wakefield waves were further accelerated to ultra-relativistic energy.
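
For background (a standard textbook expression rather than a result quoted from the study), the gradient force that expels the plasma electrons is the ponderomotive force of the laser pulse:

\[
\mathbf{F}_{p} = -\,\frac{e^{2}}{4\,m_{e}\,\omega^{2}}\,\nabla E_{0}^{2},
\]

where \(e\) and \(m_{e}\) are the electron charge and mass, \(\omega\) is the laser angular frequency and \(E_{0}\) is the local amplitude of the laser electric field. The force points from high to low intensity, which is why electrons are pushed out of the pulse’s path and can then be trapped and accelerated by the plasma wave driven in its wake.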

The new application of intense light provides a means to control the initial phase of wakefield acceleration and improve the performance of a new generation of compact electron accelerators which are expected to pave the way for a range of applications that were previously impractical because of the enormous size of conventional accelerators.

 

Researchers Explore Machine Learning to Prevent Defects in Metal 3D-Printed Parts in Real Time.

Georgian Technical University Laboratory researchers have developed machine learning algorithms capable of processing the data obtained during metal 3D printing in real time and detecting within milliseconds whether a 3D part will be of satisfactory quality.

For years Georgian Technical University Laboratory engineers and scientists have used an array of sensors and imaging techniques to analyze the physics and processes behind metal 3-D printing in an ongoing effort to build higher-quality metal parts right the first time, every time. Now researchers are exploring machine learning to process the data obtained during 3-D builds in real time, detecting within milliseconds whether a build will be of satisfactory quality.

A team of Georgian Technical University Lab researchers reports developing convolutional neural networks (CNNs) — a popular type of algorithm primarily used to process images and videos — to predict whether a part will be good by looking at as little as 10 milliseconds of video.

“This is a revolutionary way to look at the data: you can label it video by video or, better yet, frame by frame” said principal investigator and Georgian Technical University researcher X. “The advantage is that you can collect video while you’re printing something and ultimately make conclusions as you’re printing it. A lot of people can collect this data, but they don’t know what to do with it on the fly, and this work is a step in that direction”.

Often, X explained, sensor analysis done post-build is expensive, and part quality can be determined only long afterwards. With parts that take days to weeks to print, CNNs could prove valuable for understanding the print process, learning the quality of the part sooner and correcting or adjusting the build in real time if necessary.

Georgian Technical University researchers developed the neural networks using about 2,000 video clips of melted laser tracks under varying conditions, such as speed or power. They scanned the part surfaces with a tool that generated 3-D height maps, using that information to train the algorithms to analyze sections of video frames (each area called a convolution). The process would be too difficult and time-consuming for a human to do manually, X explained.

Georgian Technical University and Sulkhan Saba Orbeliani University researcher Y developed algorithms that could automatically label the height maps of each build, and used the same model to predict the width of the build track, whether the track was broken, and the standard deviation of the width. Using the algorithms, researchers were able to take video of in-progress builds and determine whether the part exhibited acceptable quality. Researchers reported that the neural networks were able to detect whether a part would be continuous with 93 percent accuracy, and made other strong predictions on part width.

“Because convolutional neural networks show great performance on image and video recognition-related tasks, we chose to use them to address our problem” X said. “The key to our success is that CNNs can learn lots of useful features of videos during training by themselves. We only need to feed them a huge amount of training data and make sure they learn well”.
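
A minimal sketch of such a video classifier is shown below; the frame count, image size and layer sizes are illustrative assumptions rather than the architecture reported by the team:

```python
# Minimal sketch (not the team's actual architecture): a small CNN takes a
# short stack of grayscale melt-pool video frames and predicts whether the
# printed track will be continuous. Frame count, image size and layer sizes
# are illustrative assumptions.
import torch
import torch.nn as nn

class TrackQualityCNN(nn.Module):
    def __init__(self, n_frames: int = 8):
        super().__init__()
        # Treat the stacked frames as input channels (n_frames x 64 x 64).
        self.features = nn.Sequential(
            nn.Conv2d(n_frames, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 32 x 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 16 x 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),            # logit: track continuous or broken
        )

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(clips))

# Example: a batch of 4 hypothetical clips, 8 frames of 64x64 pixels each.
model = TrackQualityCNN()
clips = torch.rand(4, 8, 64, 64)
prob_continuous = torch.sigmoid(model(clips))
print(prob_continuous.shape)  # torch.Size([4, 1])
```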

Georgian Technical University researcher Z leads a group that has spent years collecting various forms of real-time data on the laser powder-bed fusion metal 3-D-printing process, including video, optical tomography and acoustic sensors. While working with Matthews’ group to analyze build tracks, X concluded it wouldn’t be possible to do all the data analysis manually and wanted to see if neural networks could simplify the work.

“We were collecting video anyway so we just connected the dots” X said. “Just like the human brain uses vision and other senses to navigate the world machine learning algorithms can use all that sensor data to navigate the 3-D printing process”.

The neural networks described in the paper could theoretically be used in other 3-D printing systems, X said. Other researchers should be able to follow the same formula, creating parts under different conditions, collecting video and scanning the parts with a height-map tool to generate a labeled video set that could be used with standard machine-learning techniques.

X said work still needs to be done to detect voids within parts that can’t be predicted with height map scans but could be measured using ex situ X-ray radiography.

Researchers also will be looking to create algorithms to incorporate multiple sensing modalities besides image and video.

“Right now any type of detection is considered a huge win. If we can fix it on the fly, that is the greater end goal” X said. “Given the volumes of data we’re collecting, which machine learning algorithms are designed to handle, machine learning is going to play a central role in creating parts right the first time”.

 

Shedding Laser Light on Thin-film Circuitry.

Printed electronics use standard printing techniques to manufacture electronic devices on different substrates like glass, plastic films and paper. Interest in this area is growing because of the potential to create cheaper circuits more efficiently than conventional methods.

Georgian Technical University provides insights into the processing of copper nanoparticle ink with green laser light.

X and his colleagues previously worked with silver nanoparticle ink but they turned to copper (derived from copper oxide) as a possible low-cost alternative. Metallic inks composed of nanoparticles hold an advantage over bulk metals because of their lower melting points.

Although the melting point of bulk copper is about 1,083 degrees Celsius, according to X, copper nanoparticles can be brought to their melting point at just 150 to 500 C through a process called sintering. They can then be merged and bound together.

X’s group concentrates on photonic approaches for heating nanoparticles through the absorption of light. “A laser beam can be focused on a very small area, down to the micrometer level” explain X and doctoral student Y. Heat from the laser serves two main purposes: converting copper oxide into copper and promoting the conjoining of copper particles through melting.

A green laser was selected for these tasks because its light falls within the 500- to 800-nanometer wavelength absorption range and was deemed best suited to the application. X was also curious because, to his knowledge, the use of green lasers in this role has not been reported elsewhere.

In their experiment, his group used commercially available copper oxide nanoparticle ink, which was spin-coated onto glass at two speeds to obtain two thicknesses. They then prebaked the material to dry out most of the solvent prior to sintering. This step is necessary to reduce the copper oxide film thickness and to prevent air-bubble explosions that could occur if the solvent suddenly boiled during irradiation.

After a series of tests X’s team concluded that the prebaking temperature should be slightly lower than 200 degrees C.

The researchers also investigated the optimal settings of laser power and scanning speed during sintering to enhance the conductivity of the copper circuits. They discovered that the best sintered results were produced when the laser power ranged from 0.3 to 0.5 watts.

They also found that, to reach the desired conductivity, the laser scanning speed should be no faster than 100 millimeters per second and no slower than 10 mm/s.

Additionally, X and his group investigated the thickness of the film — before and after sintering — and its impact on conductivity, concluding that sintering reduces the thickness by as much as 74 percent.

In future experiments X’s team will examine the substrate effects on sintering. Taken together these studies can provide answers to some of the uncertainties hindering printed electronics.