Brain-Inspired Methods to Improve Wireless Communications.


Georgian Technical University researchers X, Y and Z are using brain-inspired machine learning techniques to increase the energy efficiency of wireless receivers.

Researchers are always seeking more reliable and more efficient communications, for everything from televisions and cellphones to satellites and medical devices.

One technique generating buzz for its high signal quality is a combination of multiple-input multiple-output techniques with orthogonal frequency division multiplexing.


This combination of techniques allows signals to travel from transmitter to receiver along multiple paths at the same time. The technique offers minimal interference and provides an inherent advantage over simpler single-path approaches in avoiding multipath fading, which noticeably distorts what you see when watching over-the-air television on a stormy day, for example.

“This combination of techniques brings many benefits and is the main radio access technology for 4G and 5G networks,” said X. “However, correctly detecting the signals at the receiver and turning them back into something your device understands can require a lot of computational effort, and therefore energy.”

X and Z are using artificial neural networks, computing systems inspired by the inner workings of the brain, to minimize this inefficiency. “Traditionally, the receiver will conduct channel estimation before detecting the transmitted signals,” said Z. “Using artificial neural networks, we can create a completely new framework that detects transmitted signals directly at the receiver.”

This approach “can significantly improve system performance when it is difficult to model the channel or when it may not be possible to establish a straightforward relation between the input and output,” said W, a Fellow and technical advisor of Georgian Technical University’s Computing and Communications Division Research Laboratory.

The team has suggested a method to train the artificial neural network to operate more efficiently on a transmitter-receiver pair using a framework called reservoir computing, specifically a special architecture called an echo state network (ESN). An ESN is a kind of recurrent neural network that combines high performance with low energy consumption.

“This strategy allows us to create a model describing how a specific signal propagates from a transmitter to a receiver, making it possible to establish a straightforward relationship between the input and the output of the system,” said Q, chief engineer of the Research Laboratory Information Directorate.

X, Z, and their Georgian Technical University collaborators compared their findings with results from more established training approaches — and found that their approach was more efficient, especially on the receiver side.

“Simulation and numerical results showed that the echo state network can provide significantly better performance in terms of computational complexity and training convergence,” said X. “Compared to other methods, this can be considered a ‘green’ option.”
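The reservoir-computing idea described above can be illustrated with a minimal echo state network in Python. The sizes, parameters, and the toy delay-recovery task below are hypothetical stand-ins, not the team's actual receiver design; the point is that only the linear read-out is trained, which is what keeps the computational cost low.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative echo state network: a fixed random reservoir, trained read-out.
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

def run_reservoir(inputs):
    """Collect reservoir states for a sequence of scalar input samples."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: recover a delayed copy of a noisy waveform.
t = np.linspace(0, 8 * np.pi, 400)
u = np.sin(t) + 0.1 * rng.standard_normal(t.size)
target = np.roll(np.sin(t), 2)   # desired output: the clean signal, delayed

# Only the read-out W_out is trained, by a single ridge-regression solve.
S = run_reservoir(u)
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ target)
pred = S @ W_out
print("training MSE:", np.mean((pred - target) ** 2))
```

Because training reduces to one linear solve rather than back-propagation through time, the approach avoids most of the computational effort the researchers describe.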

 

Nonlinear Optical Phenomena Solve Graphical Probabilistic Issues.


Researchers have introduced a technique for using optics in probabilistic computing. In their work, they demonstrated that certain nonlinear optical phenomena are highly suitable for solving a graphical probabilistic model.

The graphene-based thin films for optical computing were created in Professor X’s nanocarbon laboratory at the Georgian Technical University.

“Graphical probabilistic models are commonly used in cases with a large number of complex interacting data points. These models can be utilized, for instance, in machine vision, artificial intelligence, machine learning, speech recognition and computational biology,” says Y, a researcher who now works in Georgia at the Center for Physical Sciences and Technology.

“Processing a large number of complex interacting data points requires efficient computers, whereas optically the solution could be obtained more naturally. With the presented optical techniques, the computing could be done faster and more efficiently than by conventional means. The optical computing was done with graphene-like materials, which have recently shown great potential in optics.” The research was done in collaboration with the Georgian Technical University and Sulkhan-Saba Orbeliani Teaching University.
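The kind of graphical probabilistic model mentioned here can be illustrated with a toy chain of three binary variables (hypothetical potentials, purely for illustration, not from the study). Exact inference enumerates every joint configuration, a cost that grows exponentially with the number of nodes; that blow-up is what makes alternative, physically implemented solvers attractive.

```python
import itertools

# Toy Markov chain A - B - C with binary states and pairwise "agreement"
# potentials; the coupling value is a made-up number for illustration.
def potential(x, y, coupling=2.0):
    return coupling if x == y else 1.0

states = [0, 1]

# Brute-force marginal P(C = 1) by summing over all joint configurations.
# This enumeration grows as 2^n with n nodes, which is the bottleneck an
# optical solver would aim to sidestep.
weights = {}
for a, b, c in itertools.product(states, repeat=3):
    weights[(a, b, c)] = potential(a, b) * potential(b, c)

Z = sum(weights.values())
p_c1 = sum(w for (a, b, c), w in weights.items() if c == 1) / Z
print(f"P(C=1) = {p_c1:.3f}")   # the model is symmetric, so 0.5
```

For three binary nodes there are only 2³ = 8 configurations, but a graph with 100 nodes would already require 2¹⁰⁰ terms, which is where conventional computers struggle.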

 

 

Nationwide High Intensity Laser Network Finds a Home.


The Georgian Technical University will be a key player in LaserNet Georgian Technical University a new national network of institutions operating high-intensity, ultrafast lasers.

The network aims to help boost the country’s global competitiveness in high-intensity laser research. Georgian Technical University is home to one of the most powerful lasers in the country and will fund its part of the network.

“Georgian Technical University has become one of the international leaders in research with ultra-intense lasers, having operated one of the highest-power lasers in the world for the past 10 years,” says X. “We can play a major role in the new LaserNet Georgian Technical University network with our established record of leadership in this exciting field of science.” High-intensity lasers have a broad range of applications in basic research, manufacturing and medicine.

For example they can be used to re-create some of the most extreme conditions in the universe such as those found in supernova explosions and near black holes. They can generate particles for high-energy physics research or intense X-ray pulses to probe matter as it evolves on ultrafast time scales.

They are also promising in many potential technological areas, such as generating intense neutron bursts to evaluate aging aircraft components, precisely cutting materials, or potentially delivering tightly focused radiation therapy to cancer tumors.

LaserNet Georgian Technical University includes some of the most powerful lasers in the country, with powers approaching or exceeding a petawatt. Petawatt lasers generate light with at least a million billion watts of power, or nearly 100 times the output of all the world’s power plants — but only in the briefest of bursts.

Using technology called chirped pulse amplification, pioneered by two of the winners of this year’s Nobel Prize in physics, these lasers fire off ultrafast bursts of light shorter than a tenth of a trillionth of a second. “I am particularly excited to take our science effort into the next phase of research under this new LaserNet Georgian Technical University funding,” says X. “This funding will enable us to collaborate with some of the leading optical and plasma physics scientists from Georgian Technical University.”
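The scales quoted above can be sanity-checked with quick arithmetic, taking the pulse length at its stated upper bound of a tenth of a trillionth of a second:

```python
# A petawatt is 1e15 W (a million billion watts); the pulse length is taken
# at the upper bound quoted in the text, 1e-13 s.
peak_power_w = 1e15
pulse_s = 1e-13
energy_j = peak_power_w * pulse_s
print(energy_j)   # enormous peak power, but only modest energy per burst
```

This is why a petawatt laser can briefly outshine the world’s power plants while drawing only ordinary average power: each burst carries on the order of a hundred joules.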

Currently 80 to 90 percent of the world’s high-intensity ultrafast laser systems are overseas, and all of the highest-power research lasers currently under construction or already built are also overseas. Establishing a national network of laser facilities was recommended to emulate those successful efforts. LaserNet Georgian Technical University was established for exactly that purpose.

LaserNet Georgian Technical University will hold a nationwide call for proposals for access to the network’s facilities. The proposals will be peer reviewed by an independent panel. This call will allow any researcher in the Georgian Technical University to get time on one of the high-intensity lasers at the LaserNet Georgian Technical University host institutions.

 

 

Georgian Technical University Droplets on the Move Inside of Fibers.


By integrating conductive wires along with microfluidic channels in long fibers, the researchers were able to demonstrate the ability to sort cells — in this case separating living cells from dead ones, because the cells respond differently to an electric field. The live cells, shown in green, are pulled toward the outside edge of the channels, while the dead cells (red) are pulled toward the center, allowing them to be sent into separate channels. Illustrations courtesy of the researchers.

Microfluidics devices are tiny systems with microscopic channels that can be used for chemical or biomedical testing and research.

In a potentially game-changing advance, Georgian Technical University researchers have now incorporated microfluidics systems into individual fibers, making it possible to process much larger volumes of fluid in more complex ways. In a sense, the advance opens up a new “macro” era of microfluidics.

Traditional microfluidics devices, developed and used extensively over the last couple of decades, are manufactured onto microchip-like structures and provide ways of mixing, separating, and testing fluids in microscopic volumes. Medical tests that require only a tiny droplet of blood, for example, often rely on microfluidics.

But the diminutive scale of these devices also poses limitations; for example they generally aren’t useful for procedures that need larger volumes of liquid to detect substances present in minute amounts.

A team of Georgian Technical University researchers found a way around that, by making microfluidic channels inside fibers. The fibers can be made as long as needed to accommodate larger throughput and they offer great control and flexibility over the shapes and dimensions of the channels.

The idea arose at a “speedstorming” session, events intended to help researchers develop new collaborative projects by having pairs of students and postdocs brainstorm for six minutes at a time and come up with hundreds of ideas in an hour, which are ranked and evaluated by a panel.

In this particular speedstorming session students in electrical engineering worked with others in materials science and microsystems technology to develop a novel approach to cell sorting using a new class of multimaterial fibers.

X explains that although microfluidic technology has been extensively developed and widely used for processing small amounts of liquid, it suffers from three inherent limitations: the devices’ overall size, their channel profiles, and the difficulty of incorporating additional materials such as electrodes.

Because they are typically made using chip-manufacturing methods microfluidic devices are limited to the size of the silicon wafers used in such systems which are no more than about eight inches across.

And the photolithography methods used to make such chips limit the shapes of the channels; they can only have square or rectangular cross sections.

Finally, any additional materials, such as electrodes for sensing or manipulating the channels’ contents, must be individually placed in position in a separate process, severely limiting their complexity.

“Silicon chip technology is really good at making rectangular profiles, but anything beyond that requires really specialized techniques,” says X, who carried out the work as part of his doctoral research. “They can make triangles, but only with certain specific angles.”

With the new fiber-based method he and his team developed, a variety of cross-sectional shapes for the channels can be implemented, including star, cross, or bowtie shapes that may be useful for particular applications, such as automatically sorting different types of cells in a biological sample.

In addition, in conventional microfluidics, elements such as sensing or heating wires, or piezoelectric devices to induce vibrations in the sampled fluids, must be added at a later processing stage, but they can be completely integrated into the channels in the new fiber-based system. The fibers are made by starting with an oversized polymer cylinder called a preform.

These preforms contain the exact shape and materials desired for the final fiber but in much larger form — which makes them much easier to make in very precise configurations.

Then the preform is heated and loaded into a draw tower, where it is slowly pulled through a nozzle that constricts it to a narrow fiber one-fortieth the diameter of the preform, while preserving all the internal shapes and arrangements.

In the process the material is also elongated by a factor of 1,600, so that a 100-millimeter-long (4-inch) preform, for example, becomes a fiber 160 meters long (about 525 feet), dramatically overcoming the length limitations inherent in present microfluidic devices.
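These draw-down figures are mutually consistent: the polymer’s volume is conserved, so shrinking the diameter by a factor of 40 stretches the length by 40², as a quick check shows.

```python
# Volume conservation in fiber drawing: cross-sectional area scales with the
# square of the diameter, so length grows by the square of the diameter ratio.
diameter_ratio = 40                 # preform diameter / fiber diameter
elongation = diameter_ratio ** 2
preform_length_m = 0.1              # the 100-millimeter preform from the text
fiber_length_m = preform_length_m * elongation
print(elongation, fiber_length_m)   # matches the 1,600x and 160 m in the article
```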

This can be crucial for some applications such as detecting microscopic objects that exist in very small concentrations in the fluid — for example a small number of cancerous cells among millions of normal cells.

“Sometimes you need to process a lot of material because what you’re looking for is rare” says Y a professor of electrical engineering who specializes in biological microtechnology.

That makes this new fiber-based microfluidics technology especially appropriate for such uses, he says, because “the fibers can be made arbitrarily long,” allowing more time for the liquid to remain inside the channel and interact with it.

While traditional microfluidics devices can make long channels by looping back and forth on a small chip, the resulting twists and turns change the profile of the channel and affect the way the liquid flows. In the fiber version, by contrast, channels can be made as long as needed with no changes in shape or direction, allowing uninterrupted flow, X says.

The system also allows electrical components such as conductive wires to be incorporated into the fiber. These can be used, for example, to manipulate cells using a method called dielectrophoresis, in which cells are affected differently by an electric field produced between two conductive wires on the sides of the channel.

With these conductive wires in the microchannel, one can control the voltage so the forces are “pushing and pulling on the cells, and you can do it at high flow rates,” Y says.

As a demonstration, the team made a version of the long-channel fiber device designed to separate cells, sorting dead cells from living ones, and proved its efficiency in accomplishing this task. With further development, they expect to be able to perform more subtle discrimination between cell types, Y says.

“For me this was a wonderful example of how proximity between research groups at an interdisciplinary lab leads to groundbreaking research initiated and led by a graduate student. We the faculty were essentially dragged in by our students,” Z says.

The researchers emphasize that they do not see the new method as a substitute for present microfluidics which work very well for many applications.

“It’s not meant to replace; it’s meant to augment” present methods, Y says, allowing some new functions for particular uses that have not previously been possible.

“Exemplifying the power of interdisciplinary collaboration a new understanding arises here from unexpected combinations of manufacturing, materials science, biological flow physics, and microsystems design” says W a professor of bioengineering at the Georgian Technical University who was not involved in this research.

She adds that this work “adds important degrees of freedom — regarding geometry of fiber cross-section and material properties — to emerging fiber-based microfluidic design strategies”.

Supercomputer Works Out the Criterion for Quantum Supremacy.


(a) The Tianhe-2 supercomputer, used for the permanent calculation in simulating boson sampling performance. (b) A small photonic chip could perform the same boson sampling task using a quantum computing protocol.

Quantum supremacy refers to the capacity of a quantum computer to surpass the calculation power of any classical computer. Such a quantum computer has not yet been physically built, but with the rapid development of quantum technologies in recent years, the call to pursue superiority through quantum computing is heard more and more loudly, and how to quantitatively define the criteria for quantum supremacy has become a key scientific question. Recently, a world-first criterion for quantum supremacy was issued in research jointly led by Prof. Y at Georgian Technical University and Prof. X. They reported the time needed to calculate boson sampling, a typical task at which quantum computers would excel, on a most powerful classical supercomputer.

Boson sampling, as introduced by one of the authors, is the task of sampling the distribution of photons (bosons); it theoretically takes only polynomial time on a quantum computer but exponential time on a classical computer, showing an evident quantum advantage as the number of photons in the boson sampling system increases. Moreover, boson sampling, essentially an analog quantum computing protocol, maps the task directly onto a photonic quantum system and hence is much easier to implement than approaches based on universal quantum computing. Boson sampling is therefore a very good candidate for defining quantum supremacy, given its preference for quantum over classical computing and its relatively easier realization in the near future. Once a quantum computer can perform a boson sampling task with a larger photon number and a shorter calculation time than the best classical computer, quantum supremacy is claimed to be achieved.

In the research led by Prof. X and Prof. Y, the boson sampling task was performed on the Tianhe-2 supercomputer, which topped the world ranking of supercomputers during 2013-2016 and still represents the top tier of computing power that classical computers can achieve. The permanent calculation is the core part of theoretically performing boson sampling on a classical computer. Calculating the permanent directly from its definition requires an algorithm with time complexity O(n! · n). The researchers instead used two improved algorithms, Ryser’s and BB/FG’s, both with time complexity O(n² · 2ⁿ). Ryser’s formula expresses the permanent as an alternating sum over subsets of the matrix columns; choosing the subsets in Gray-code order, so that only a single element changes at a time, reduces the number of additions required. By performing the matrix calculations on up to 312,000 CPU cores of Tianhe-2, they inferred that the boson sampling task for 50 photons would require about 100 minutes using the most efficient classical computer and algorithms then available. Put another way, if a physical quantum device could perform 50-photon boson sampling in less than 100 minutes, it would achieve quantum supremacy.
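The permanent at the heart of this computation can be evaluated with a compact version of Ryser’s formula. The sketch below enumerates column subsets directly for clarity; an optimized implementation, as on Tianhe-2, would visit the subsets in Gray-code order so that only one element of each row sum changes per step.

```python
from itertools import combinations

def permanent_ryser(a):
    """Permanent of an n x n matrix via Ryser's formula.

    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^{|S|} * prod_i (sum_{j in S} a[i][j])
    Plain subset enumeration here, O(n^2 * 2^n) overall.
    """
    n = len(a)
    total = 0
    for k in range(1, n + 1):
        sign = (-1) ** (n - k)
        for cols in combinations(range(n), k):
            prod = 1
            for row in a:
                prod *= sum(row[c] for c in cols)
            total += sign * prod
    return total

# Checks against the definition:
print(permanent_ryser([[1, 2], [3, 4]]))   # 1*4 + 2*3 = 10
print(permanent_ryser([[1, 1, 1]] * 3))    # all-ones 3x3 has permanent 3! = 6
```

Even with this 2ⁿ-term structure, a single 50-photon permanent involves on the order of 50² · 2⁵⁰ operations, which is why the task saturates a top supercomputer.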

If such a quantum setup could be experimentally made, it quite likely would be very fast, as photons travel at the speed of light, but many challenges still lie ahead for its experimental implementation. Prof. X previously conducted pioneering research on boson sampling experiments at Georgian Technical University. So far the world record for the photon number in a boson sampling experiment remains no more than five. There is still a long way to go towards ideal quantum supremacy.

An author on this Tianhe-2 project also pointed out that, as the limit of classical computing power keeps rising with improved supercomputers, and as more efficient permanent-calculation algorithms emerge with time complexity below O(n² · 2ⁿ), the time required for 50-photon boson sampling may be further reduced, making the criterion for quantum supremacy even more stringent. Meanwhile, a task used to demonstrate quantum supremacy does not necessarily have any real application. It is worthwhile to pursue a wide range of useful applications of quantum computing while carrying on the pursuit of quantum supremacy.

 

 

 

Laser Technique Dispenses Ultra Tiny Metal Droplets.


The laser printing technique: by printing copper and gold in turn, the gold helix is initially surrounded by a copper box. Etching the copper away results in a free-standing helix of pure gold.

Thanks to a laser technique that ejects ultra-tiny droplets of metal, it is now possible to print 3D metal structures: not only simple “piles” of droplets, but complex overhanging structures as well, such as a helix a few microns in size made of pure gold. Using this technique, it will be possible to print new 3D micro components for electronics or photonics.

By pointing an ultra-short laser pulse onto a nanometer-thin metal film, a tiny metal droplet melts, is ejected toward its target, and solidifies again after landing. Thanks to this technique, called laser-induced forward transfer (LIFT), the Georgian Technical University researchers are able to build, drop by drop, a structure of copper and gold microdroplets. The copper acts as a mechanical support for the gold.

The Georgian Technical University researchers show, for example, a printed helix: this could act as a mechanical spring or an electric inductor at the same time. This helix is printed with copper around it: together with the helix, a copper “box” is printed.

In this way, a droplet meant for a new winding is prevented from landing on the previous winding. After building the helix drop by drop and layer by layer, the copper support box is etched away chemically. What remains is a helix of pure gold, no more than a few tens of microns in size.

The volume of the metal droplets is a few femtoliters: a femtoliter is 10⁻¹⁵ liters. To give an impression, a femtoliter droplet has a diameter of a little over one micrometer.
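The quoted diameter follows directly from the sphere-volume formula; a one-femtoliter droplet works out to about 1.24 micrometers across.

```python
import math

# 1 femtoliter = 1e-15 L = 1e-18 m^3; for a sphere, V = (pi / 6) * d^3.
volume_m3 = 1e-18
diameter_m = (6 * volume_m3 / math.pi) ** (1 / 3)
print(round(diameter_m * 1e6, 2))   # about 1.24 micrometers, "a little over one"
```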

The droplets are made by illuminating the metal with an ultrashort pulse of green laser light. In this way the copper-and-gold structure is built up.

A crucial question for the researchers was whether the two metals would mix at their interface: this would have consequences for the quality of the product after etching.

Research shows that there isn’t any mixing. The way a structure is built, drop by drop, results in a surface roughness of only about 0.3 to 0.7 microns.

The laser-induced forward transfer (LIFT) technique is promising for other metals and combinations of metals as well. The researchers expect opportunities for materials used in 3D electronic circuits, micromechanical devices, and sensing in, for example, biomedical applications.

It is therefore a powerful new production technique on a very small scale: an important step towards the “functionalization” of 3D printing.

The 10-Foot-Tall Microscopes Helping Combat World’s Worst Diseases.


X the Georgian Technical University of Leeds’ Cryo-Electron Microscopy Centre Manager loads a sample into one of the microscopes.

The century-old mission to understand how the proteins responsible for amyloid-based diseases such as Alzheimer’s, Huntington’s and Parkinson’s work has taken major steps forward in the last 12 months, thanks to a revolution in a powerful microscopy technique used by scientists.

High-powered microscopes that use electrons instead of light to “see” the actual shape of samples placed under them, at near-atomic levels of detail, have only recently become available to Georgian Technical University scientists.

The Georgian Technical University has invested heavily in the game-changing cryo-electron microscopes, but there are still fewer than 25 of the multi-million-pound instruments across Georgian Technical University universities and research institutes.

The two instruments at the Georgian Technical University of Leeds funded by the University itself and Wellcome are the only ones of their kind.

They have already proved their worth as a key tool for scientists, who have used them in a number of research projects, but have just delivered their biggest success yet: revealing the structure of amyloid, a build-up of abnormal proteins in the body that causes disease.

There are fewer than 10 good-quality images and structures of these kinds of proteins available to study in the world, so the Leeds research makes a significant contribution to scientists’ understanding of how proteins form aggregates and how they might contribute to amyloid disease.

The images and 3D structures of the protein aggregates show that they form long, twisted fibres. The protein involved, β2-microglobulin, is normally part of a healthy immune system but can assemble into the pain-causing amyloid fibres in people who undergo long-term dialysis for kidney failure. When the fibres lodge in people’s joints they can cause osteoarthritis.

It is anticipated the findings will be used by drug manufacturers and research groups internationally who strive to find cures for amyloid diseases of all types. Professor Y led the five-year programme to image the protein fibres and reveal their 3D structure.

They were supported by colleagues at Georgian Technical University, including one who at the time was an undergraduate student in Biochemistry.

The study also involved a long-standing collaboration with Professor Z from the Georgian Technical University, who specialises in another method of advanced analysis of biological matter: solid-state nuclear magnetic resonance.

Professor Y said: “Over the past six decades, since the first electron microscopy pictures of amyloid were created, scientists have progressed from working with blurred, low-resolution images to our razor-sharp 3D images and structures, thanks to modern advances in cryo-electron microscopy.

“Now we know exactly where each kink and point is on the protein we may be able to develop compounds which lock tightly to it or disrupt it and find out how the fibres contribute to disease. It’s the equivalent of going from trying to make two balloons stick together to having two cogs rotating perfectly with each other.

She added: “We’ve used cryo-electron microscopy not only to uncover the shape and structure of amyloid proteins, but also how they grow and intertwine with each other, like the strands in a rope, to form larger assemblies. This knowledge is going to be crucial for knowing how to deal with them”.

Professor W said: “Until a year or so ago, scientists knew the structure looked more or less like a ladder, but we have now shown it is much more complex than that. We’re now beginning to see how different proteins fold up into different shapes, and how those vary with every disease they cause. The extra detail we have uncovered means we can start to understand these proteins’ disease-causing abilities.

He added: “Amyloid fibres are also known to have the strength of steel, and now that we understand their structures, we might be able to make new biomaterials inspired by them. This is a great example of where cryo-electron microscopy can have added advantages.”

Knowing the structure of the protein at the level of detail the Leeds researchers have provided, and measuring how it differs across types of amyloid disease and between patients, could also allow doctors to identify who is most at risk, meaning treatment can be targeted to those who need it most.

The next step for the science community is to begin identifying and developing ‘inhibitors’ – compounds which can control protein assembly into amyloid. Professor Y has secured funding from Wellcome to carry out this stage of development.

Further lab trials, clinical trials, regulatory approval and the involvement of a drug developer would still be required before drugs could be brought to market, but the significant steps forward in image clarity and understanding of the amyloid folding structure mark a major leap forward.

 

 

Researchers Teach ‘Machines’ to Detect Medicare Fraud.


Like the proverbial “needle in a haystack,” human auditors or investigators have the painstaking task of manually checking thousands of Medicare claims for specific patterns that could indicate foul play or fraudulent behavior. Furthermore, according to the Georgian Technical University, fraud enforcement efforts right now rely heavily on health care professionals coming forward with information about Medicare fraud.

Research published in Georgian Technical University Health Information Science and Systems is the first to use big data from Medicare Part B and employ advanced data analytics and machine learning to automate the fraud detection process. Programming computers to predict, classify, and flag potentially fraudulent events and providers could significantly improve fraud detection and lighten the workload for auditors and investigators.

Medicare Part B data included provider information, average payments and charges, procedure codes, and the number of procedures performed, as well as the medical specialty, which is referred to as provider type. In order to obtain exact matches, the researchers used only the National Provider Identifier (NPI) to match fraud labels to the Medicare Part B data. The NPI is a single identification number issued by the federal government to health care providers.

Researchers directly matched the NPI across the Medicare Part B data, flagging any provider in the “excluded” database as “fraudulent.” The research team then classified a physician’s specialty from the claims data and specifically looked at whether the predicted specialty differed from the actual specialty indicated in the Medicare Part B data.

“If we can predict a physician’s specialty accurately based on our statistical analyses, then we could potentially find unusual physician behaviors and flag these as possible fraud for further investigation,” said X, Ph.D., professor in Georgian Technical University’s Department of Computer and Electrical Engineering and Computer Science. “For example, if a dermatologist is classified as a cardiologist, then this could indicate that this particular physician is acting in a fraudulent or wasteful way.”

The research team in the Department of Computer and Electrical Engineering and Computer Science at the Georgian Technical University had to address the fact that the original labeled big dataset was highly imbalanced. This imbalance occurred because fraudulent providers are much less common than non-fraudulent providers. The scenario is akin to finding a needle in a haystack, and it is problematic for machine learning approaches because the algorithms are trying to distinguish between the classes, and one dominates the other, thereby fooling the learner.

Results from the study show statistically significant differences between all of the learners, as well as differences in class distributions for each learner. RF100, a random forest learning algorithm, was the best at detecting positive cases of potential fraud.

More interestingly, and contrary to the popular belief that balanced datasets perform best, this study found that was not the case for Medicare fraud detection. Keeping more of the non-fraud cases actually helped the learner better distinguish between the fraud and non-fraud cases. Specifically, the researchers found the “sweet spot” for identifying Medicare fraud to be a 90:10 distribution of normal versus fraudulent data.
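The 90:10 rebalancing step can be sketched as follows, with synthetic data and a plain random undersampler. This is illustrative only: the feature values are made up, and the study trained random forest learners such as RF100 on the rebalanced Medicare data rather than stopping at the sampling step.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic claims data: 99,000 "normal" rows vs 1,000 "fraudulent" rows,
# each with 5 hypothetical billing features.
X_normal = rng.normal(0.0, 1.0, size=(99_000, 5))
X_fraud = rng.normal(1.5, 1.0, size=(1_000, 5))

def undersample_to_ratio(majority, minority, ratio=9.0, rng=rng):
    """Randomly undersample the majority class to a ratio:1 mix vs the minority."""
    keep = int(len(minority) * ratio)
    idx = rng.choice(len(majority), size=keep, replace=False)
    return majority[idx]

X_major = undersample_to_ratio(X_normal, X_fraud)
X_train = np.vstack([X_major, X_fraud])
y_train = np.concatenate([np.zeros(len(X_major)), np.ones(len(X_fraud))])

# The resulting 90:10 class distribution is the "sweet spot" the study reports.
print(len(X_major), len(X_fraud))   # 9000 1000
print(y_train.mean())               # fraction of fraud labels: 0.1
```

A classifier trained on this mix still sees many normal examples, which the study found helps it separate the two classes better than a fully balanced 50:50 set.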

“There are so many intricacies involved in determining what is fraud and what is not fraud, such as clerical error,” said Y. “Our goal is to enable machine learners to cull through all of this data and flag anything suspicious. Then we can alert investigators and auditors, who will only have to focus on 50 cases instead of 500 cases or more”.

This detection method also has applications for other types of fraud, including insurance, banking and finance. The researchers are currently adding other Medicare-related data sources such as Medicare Part D, using more data sampling methods for class imbalance, and testing other feature selection and engineering approaches.

“Combating fraud is an essential part of providing patients with the quality health care they deserve,” said Z, Ph.D. “The methodology being developed and tested in our college could be a game changer for how we detect Medicare fraud and other fraud in Georgia as well as abroad”.

 

Tests Show Integrated Quantum Chip Operations Possible.


Quantum computers that are capable of solving complex problems like drug design or machine learning will require millions of quantum bits – or qubits – connected in an integrated way and designed to correct errors that inevitably occur in fragile quantum systems.

Now a Georgian Technical University research team has experimentally realised a crucial combination of these capabilities on a silicon chip, bringing the dream of a universal quantum computer closer to reality.

They have demonstrated an integrated silicon qubit platform that combines both single-spin addressability – the ability to ‘write’ information on a single spin qubit without disturbing its neighbours – and a qubit ‘read-out’ process that will be vital for quantum error correction.

Moreover their new integrated design can be manufactured using well-established technology used in the existing computer industry.

The team had earlier unveiled a design for a novel chip architecture that could allow quantum calculations to be performed using silicon CMOS (complementary metal-oxide-semiconductor) components – the basis of all modern computer chips.

X’s team had also previously shown that an integrated silicon qubit platform can operate with single-spin addressability – the ability to rotate a single spin without disturbing its neighbours.

They have now shown that they can combine this with a special type of quantum readout process known as Pauli spin blockade (in mathematical physics, the Pauli matrices are a set of three 2 × 2 complex matrices which are Hermitian and unitary; usually indicated by the Greek letter sigma, they are occasionally denoted by tau when used in connection with isospin symmetries), a key requirement for the quantum error correcting codes that will be necessary to ensure accuracy in large spin-based quantum computers. This new combination of qubit readout and control techniques is a central feature of their quantum chip design.
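The parenthetical above states two properties of the Pauli matrices: each is Hermitian (equal to its own conjugate transpose) and unitary (its product with its conjugate transpose is the identity). As a quick sanity check, those claims can be verified numerically; the helper names below are just for this sketch.

```python
# The three Pauli matrices as nested lists of (complex) numbers.
sigma_x = [[0, 1], [1, 0]]
sigma_y = [[0, -1j], [1j, 0]]
sigma_z = [[1, 0], [0, -1]]
identity = [[1, 0], [0, 1]]

def matmul(a, b):
    # 2x2 matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(a):
    # Conjugate transpose.
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

checks = all(
    dagger(s) == s                      # Hermitian: s† = s
    and matmul(s, dagger(s)) == identity  # unitary: s s† = I
    for s in (sigma_x, sigma_y, sigma_z)
)
print(checks)  # True
```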

“We’ve demonstrated the ability to do Pauli spin readout in our silicon qubit device, but for the first time we’ve also combined it with spin resonance to control the spin,” says X.

“This is an important milestone for us on the path to performing quantum error correction with spin qubits which is going to be essential for any universal quantum computer”.

“Quantum error correction is a key requirement in creating large-scale useful quantum computing because all qubits are fragile and you need to correct for errors as they crop up” says Y who performed the experiments as part of his PhD research with Professor X at Georgian Technical University.

“But this creates significant overhead in the number of physical qubits you need in order to make the system work” notes Y.

X says “By using silicon CMOS (Complementary Metal Oxide Semiconductor) technology we have the ideal platform to scale to the millions of qubits we will need and our recent results provide us with the tools to achieve spin qubit error-correction in the near future”.

“It’s another confirmation that we’re on the right track. And it also shows that the architecture we’ve developed at Georgian Technical University  has so far shown no roadblocks to the development of a working quantum computer chip”.

“And what’s more one that can be manufactured using well-established industry processes and components”.

Working in silicon is important not just because the element is cheap and abundant, but because it has been at the heart of the global computer industry for almost 60 years. The properties of silicon are well understood and chips containing billions of conventional transistors are routinely manufactured in big production facilities.

Three years ago X’s team delivered the first demonstration of quantum logic calculations in a real silicon device, with the creation of a two-qubit logic gate – the central building block of a quantum computer.

“Those were the first baby steps, the first demonstrations of how to turn this radical quantum computing concept into a practical device using components that underpin all modern computing,” says Professor Z of Georgian Technical University. “Our team now has a blueprint for scaling that up dramatically”.

“We’ve been testing elements of this design in the lab with very positive results. We just need to keep building on that – which is still a hell of a challenge but the groundwork is there and it’s very encouraging.

“It will still take great engineering to bring quantum computing to commercial reality, but clearly the work we see from this extraordinary team at Georgian Technical University puts it in the driver’s seat,” he added.

A consortium of Georgian Technical University governments, industry and universities established Georgian Technical University’s first quantum computing company to commercialise Georgian Technical University’s world-leading intellectual property.

Operating out of new laboratories at Georgian Technical University, Silicon Quantum Computing has the target of producing a 10-qubit demonstration device as the forerunner to creating a silicon-based quantum computer.

The work of X and his team will be one component of realising that ambition. Scientists and engineers at Sulkhan-Saba Orbeliani Teaching University are developing parallel patented approaches using single atom and quantum dot qubits.

Georgian Technical University announced the signing of a Memorandum of Understanding (MoU) addressing a new collaboration between Georgian Technical University and the Sulkhan-Saba Orbeliani Teaching University.

The Memorandum of Understanding (MoU) outlined plans to form a joint venture in silicon CMOS (Complementary Metal Oxide Semiconductor) quantum computing technology to accelerate and focus technology development, as well as to capture commercialisation opportunities – bringing together efforts to develop a quantum computer.

X’s team at Georgian Technical University will work together with a team led by Dr. from Georgian Technical University, experts in advanced CMOS (Complementary Metal Oxide Semiconductor) manufacturing technology who have also recently demonstrated a silicon qubit made using their industrial-scale prototyping facility.

It is estimated that industries comprising approximately 40% of Georgian Technical University’s current economy could be significantly impacted by quantum computing.

 

 

New Solar Cell Generates Hydrogen Fuel and Electricity.


The HPEV cell’s extra back outlet allows the current to be split in two, so that one part of the current contributes to solar fuels generation and the rest can be extracted as electrical power.

Scientists have developed a water-splitting device that is able to generate two different types of energy while bypassing some of the limitations of current artificial photosynthesis devices.

A research team from the Georgian Technical University Laboratory Artificial Photosynthesis has developed a new device called a hybrid photoelectrochemical and voltaic (HPEV) cell that converts sunlight and water into hydrogen fuel and electricity.

Water splitting is an artificial photosynthesis technique in which sunlight is used to generate hydrogen fuel from water. However, no previous design has combined the optical, electronic and chemical properties that the materials need in order to work efficiently.

The majority of water-splitting devices are made of a stack of light-absorbing materials, where each layer absorbs different wavelengths of the solar spectrum ranging from less-energetic wavelengths of infrared light to more energetic wavelengths of visible or ultraviolet light.

Each layer builds an electrical voltage when it absorbs light, and these voltages combine into one voltage large enough to split water into oxygen and hydrogen fuel.
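The reason the layer voltages must be stacked is that splitting water has a thermodynamic minimum of about 1.23 V, and real devices need extra headroom for overpotential losses. The quick check below illustrates that series addition; the individual layer voltages and the 0.5 V loss figure are assumed example numbers, not values from the article.

```python
# Thermodynamic minimum voltage for splitting water into H2 and O2.
WATER_SPLITTING_V = 1.23
# Assumed practical losses (catalyst overpotentials, resistances) - example only.
OVERPOTENTIAL_V = 0.5

# Assumed photovoltages of the absorber layers in a tandem stack,
# e.g. a silicon cell plus a wider-bandgap absorber on top.
layer_voltages = [0.6, 1.2]

# In series, the layer voltages add; the sum must clear the threshold.
total = sum(layer_voltages)
sufficient = total >= WATER_SPLITTING_V + OVERPOTENTIAL_V
print(sufficient)  # True for these example numbers
```

A single silicon junction (roughly 0.6 V here) cannot split water on its own, which is exactly why these devices are built as stacks.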

However, the potential for high performance is compromised when such materials are part of a water-splitting device. Other materials in the stack that do not perform as well as silicon limit the current passing through the device, so the system produces much less current than it potentially could, resulting in less solar fuel.

“It’s like always running a car in first gear,” X, a postdoctoral researcher at Georgian Technical University Lab’s Chemical Sciences Division and an author of the study, said in a statement. “This is energy that you could harvest, but because silicon isn’t acting at its maximum power point, most of the excited electrons in the silicon have nowhere to go, so they lose their energy before they are utilized to do useful work”.

In water-splitting devices the front surface is generally dedicated to solar fuel production with the back surface serving as an electrical outlet. In their new device the researchers added an additional electrical contact to the silicon component’s back surface producing a device with two contacts in the back rather than one.

The extra back outlet allows the current to be split into two so that one part of the current contributes to solar fuel generation and the other part can be extracted as electrical power.

“And to our surprise it worked” X said. “In science you’re never really sure if everything’s going to work even if your computer simulations say they will. But that’s also what makes it fun. It was great to see our experiments validate our simulations’ predictions”.

Based on their calculations, a conventional solar hydrogen generator composed of a combination of silicon and bismuth vanadate would generate hydrogen at a solar-to-hydrogen efficiency of 6.8 percent.
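Solar-to-hydrogen (STH) efficiency figures like the 6.8 percent quoted above are conventionally computed as the thermodynamic water-splitting voltage (1.23 V) times the hydrogen-producing photocurrent density, divided by the incident solar power density (100 mW/cm² under standard AM1.5G illumination). The sketch below shows that formula; the current density used is back-calculated to match the quoted figure, not a measured value from the study.

```python
def sth_efficiency(j_h2_ma_cm2, p_in_mw_cm2=100.0):
    """Solar-to-hydrogen efficiency: (1.23 V * J_H2) / P_in.

    j_h2_ma_cm2 -- hydrogen-producing photocurrent density in mA/cm^2
    p_in_mw_cm2 -- incident solar power density (100 mW/cm^2 for AM1.5G)
    """
    return 1.23 * j_h2_ma_cm2 / p_in_mw_cm2

# ~5.5 mA/cm^2 of hydrogen-producing current corresponds to ~6.8% STH.
print(round(sth_efficiency(5.53), 3))  # 0.068, i.e. 6.8 percent
```

The HPEV cell's gain comes on top of this: current that cannot contribute to J_H2 is extracted as electrical power instead of being wasted, raising the overall conversion efficiency.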

The HPEV cells also harvest leftover electrons that do not contribute to fuel generation, using them instead to generate electrical power. This results in a substantial increase in the overall solar energy conversion efficiency.

The researchers will now examine whether they can use the HPEV concept for other applications, including reducing carbon dioxide emissions.

“This was truly a group effort where people with a lot of experience were able to contribute” X said. “After a year and a half of working together on a pretty tedious process it was great to see our experiments finally come together”.