Category Archives: HPC/Supercomputing

Saving The Planet With Supercomputing: Researchers Search For Greener Catalysts, Energy Sources And Batteries.

The 225 cluster supercomputer is installed at Georgian Technical University. “We are part of a worldwide scientific community trying to find new ways to help humanity produce energy and other things more efficiently, in ways that help our planet,” stated X.

Catalysts make things happen. Throughout nature and industry they are used to accelerate chemical reactions, turning one chemical into another, and then another, until we have the result we want, such as ammonia. We have catalysts in our bodies, called enzymes, and most of us sit on a catalyst every day as we commute to and from work or school: the catalytic converter in our automobiles, which converts harmful gases such as nitrogen oxides and carbon monoxide produced by the fossil-fuel-burning combustion engine into something less toxic.

“Something else that is extremely important in relation to catalysts is the production of fertilizers used in agriculture,” explained X. “Fertilizers are basically ammonia, NH3 (ammonia, or azane, is a compound of nitrogen and hydrogen with the formula NH3; it is a colourless gas with a characteristic pungent smell and contributes significantly to the nutritional needs of terrestrial organisms by serving as a precursor to food and fertilizers), which is mostly nitrogen. There are millions of tons of ammonia produced worldwide every year,” continued X.

It takes a lot of energy to produce ammonia, with temperatures of up to 900 deg. C and 100 atmospheres of pressure, and the process releases enormous amounts of CO2 (carbon dioxide is a colorless gas with a density about 53% higher than that of dry air; it consists of a carbon atom covalently double bonded to two oxygen atoms and occurs naturally in Earth’s atmosphere as a trace gas). “Fertilizer is the reason the global population has been able to expand as much as it has, by growing the crops to feed the world. So it’s very important that we are able to continue to produce ammonia cheaply and with less impact on the planet.” And where do scientists look for more efficient fertilizer production? How about peas?

“It is well known that many plants such as peas can actually produce ammonia in the roots of the plant,” explained X. “Researchers know how these plants produce nitrogen, so they are trying to figure out whether we can mimic that in a scalable industrial process.” It takes the right catalysts to do that. And so the hunt for new catalysts proceeds.

There are about 100 elements in nature. But they can form billions of different materials and chemicals. Which of these possible substances can create the chemical environment needed to form good catalysts for something like making fertilizer, or for producing the myriad other substances we need such as safe fuels, while causing less harm to our earth? And can these catalysts be made safely, reliably and economically?

“Quantum mechanical calculations help us understand how catalysis works at the atomic scale,” commented Professor Y. “Whether we’re working with experimental researchers actively developing new materials, or screening the many possible chemical combinations to find new catalysts that we can produce experimentally, we need the supercomputer to understand how their atoms react with other substances.”

The cluster makes these calculations extremely fast. But depending on how many millions of substances they’re considering, how many atoms are in the molecules being studied, and the depth to which they optimize the molecular structures, it can take hours, weeks or months before Professor Y has an answer about a single catalyst.

“Out of the million possible chemical combinations, the supercomputer might show 20,000 possible catalysts,” explains X. “But when we add criteria to the results, such as how stable they are, whether or not they can be produced economically, whether they produce unsafe byproducts including radioactive materials, and how selectively they produce the result we want, the computer might end up identifying ten or 20 catalysts that we can then experiment with. To do that in the lab without a computer might take hundreds of chemists thousands of years.” Besides catalysis purely between atoms, catalytic materials can react with other particles such as photons, releasing electrons that can be used for energy production.
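The screening step X describes, winnowing a large computed candidate list against stability, cost, safety and selectivity criteria, can be pictured with a small sketch; the field names and thresholds below are purely illustrative, not the group’s actual rules:

```python
# Illustrative only: filtering computed catalyst candidates by practical criteria.
# The fields and thresholds are hypothetical, not the group's actual screening rules.
candidates = [
    {"formula": "A2B3", "stability_eV": -0.45, "cost_index": 0.2, "radioactive": False, "selectivity": 0.91},
    {"formula": "CdX4", "stability_eV": -0.10, "cost_index": 0.9, "radioactive": False, "selectivity": 0.75},
    {"formula": "UO2Y", "stability_eV": -0.80, "cost_index": 0.4, "radioactive": True,  "selectivity": 0.95},
    # ... in practice, tens of thousands of entries produced by the quantum calculations
]

def passes_screen(c):
    return (c["stability_eV"] < -0.3      # thermodynamically stable enough
            and c["cost_index"] < 0.5     # plausibly economical to produce
            and not c["radioactive"]      # no unsafe byproducts
            and c["selectivity"] > 0.9)   # selective toward the desired product

shortlist = [c["formula"] for c in candidates if passes_screen(c)]
print(shortlist)   # the handful of candidates worth taking to the lab
```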

The search for alternative energy sources is a global endeavor electrified by a deep concern for the planet. While research into and production of electrically powered cars is advancing, the energy content of batteries is low compared to cleaner fuels such as hydrogen. It takes clean fuels like these to run large machinery, including aircraft, for long periods. So the world needs non-fossil fuels. And we need to produce them without using fossil fuels. Scientists are mining chemistry at the atomic level, looking for methods to efficiently create non-fossil fuels. One of these searches involves turning water into hydrogen. For decades water has been a source for generating electricity. But Professor Y, like many researchers around the world, is looking for materials that will use solar energy to split water to mine its hydrogen.

“Hydrogen is an abundant fuel source that burns cleanly,” commented Y. “And water is mostly hydrogen. When photons from the sun hit materials with certain atomic characteristics, electrons are excited with enough energy to be used for molecular reactions such as splitting water into hydrogen and oxygen. Creating materials that do this efficiently, economically and non-toxically is why researchers are working on this today.”

The challenge is that the materials need to be safe, abundant and able to produce hydrogen more competitively than is done commercially today. “For example, platinum works efficiently, but it is scarce and very expensive. So we might be able to make materials that can be used to split water efficiently, but not at large scales, because we don’t have enough of the right resources or they’re just too expensive to produce.” According to Y, water splitting is not new, but the search for materials that can split water using light has intensified over recent years.

“We have recently looked at sulfides,” continued Y, “where you have a composition of two different metal atoms and three sulfur atoms in a structure that is periodically repeated. We used the supercomputer to screen thousands of different materials that could be used. We found a fairly short list of about 15 materials. One of these was then made by an experimental group at Georgian Technical University and it turned out to have some promising properties for water splitting.” Their work continues, using the cluster to search for other candidates.

While the search for safe fuels continues, batteries are still a critical means of storing energy. Natural energy sources such as wind and solar do not always produce energy when we need to consume it. But batteries have a limited life and limited capacity. We need more efficient materials that provide denser energy storage.

Batteries have very complex electrochemistry happening inside. Atoms from the anode want to travel to the cathode, but the electrolyte only allows the transport of ions, so the electrons have to travel through the external circuit instead. The electrolyte essentially sustains a large energy difference between the electrodes. That tends to degrade the interfaces between the electrodes and the electrolyte, causing electrochemical limitations across this interface.

“The reactions and limitations at these interfaces in lithium-ion batteries, and how the so-called electrolyte interphases work, are still a puzzle in the battery community,” explained Professor Z. “The limitations can result in reduced battery life over time and other unwanted characteristics.” Adding to the complexity is that the chemical reactions and limitations are different for each type of battery.

“In designing next-generation and next-next-generation batteries, like metal-oxygen and metal-sulfur cells, the interfacial reactions and limitations are completely different. We’re using the supercomputer to identify these rate-limiting steps and the reactions that take place at these interfaces. Once we know what the fundamental limitations are, we can start to do inverse design of the materials to circumvent these reactions. We can improve the efficiency and durability of the batteries.” The challenge Professor Z’s team has with modeling these reactions lies in the time scales of the reactions and the size of their supercomputer.

“Modeling the dynamics at these interfaces requires time-steps in the simulation on the order of femtoseconds,” added Professor Z. A femtosecond is 10⁻¹⁵, or 1/1,000,000,000,000,000, of a second. “The design of the materials might be for a ten-year battery life. Full quantum chemical calculations over a ten-year life cycle for several molecular reactions of atoms would take an enormous amount of computing power.” Or billions of years of waiting with a smaller supercomputer. “We’re consistently pushing the time and length scales with our computing cluster to help us understand what is happening at these interfaces. And with this system we are doing much more than we could before with an older machine.” One interesting area Z is working on is developing a disordered material for use as an electrode in lithium-ion batteries, using artificial intelligence/machine learning on the cluster.
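A rough back-of-the-envelope calculation makes the scale of the problem concrete: simulating ten years of battery operation in femtosecond steps would require on the order of 10²³ time-steps.

```python
# Rough arithmetic: femtosecond time-steps over a ten-year battery life.
femtosecond = 1e-15                      # seconds per simulation time-step
ten_years = 10 * 365.25 * 24 * 3600      # seconds in ten years
steps = ten_years / femtosecond
print(f"{steps:.2e}")                    # ≈ 3.16e+23 time-steps
```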

“Computational predictions for disordered materials for battery applications were too complicated in the past, because we need to understand how the disorder influences the lithium transport and the stability of the electrode. We didn’t have the methods and resources to do that before. But now we’re doing quantum chemical calculations on large complex systems, so-called transition metal oxyfluorides, and achieving high-quality predictions. Then we can train much simpler models to make faster, accurate predictions that give us structural information about the electrode as we pull lithium in and out. That’s an amazing tool that we simply couldn’t have dreamt of a few years ago,” concluded Professor Z.
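The two-stage workflow Z describes, expensive quantum chemical calculations on a limited set of structures followed by training a much simpler model on the results, can be sketched with a toy surrogate fit; the descriptors and “energies” below are synthetic placeholders, not the group’s oxyfluoride data:

```python
# Toy surrogate model: fit a cheap linear model to stand-in "quantum" energies.
import numpy as np

rng = np.random.default_rng(42)
n_structures, n_descriptors = 200, 5

X = rng.normal(size=(n_structures, n_descriptors))          # structural descriptors
true_w = np.array([1.5, -0.7, 0.3, 0.0, 2.0])
y = X @ true_w + rng.normal(0, 0.05, n_structures)          # placeholder for computed energies

# Ridge-regularised least squares plays the role of the "much simpler model".
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(n_descriptors), X.T @ y)

new_structure = rng.normal(size=n_descriptors)
print(float(new_structure @ w))   # fast prediction instead of a new quantum calculation
```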

The race to save the planet through the search for safer fuels, catalytic materials and batteries involves hundreds of scientists across the globe. Work at Georgian Technical University progresses in collaboration with many other groups, leveraging Georgian Technical University’s scientific expertise in materials research using computational physics on the cluster.


Georgian Technical University Quantum Science Turns Social.

This is the game interface of the Georgian Technical University Challenge. Players could manipulate three curves, representing two laser beam intensities and the strength of a magnetic field gradient respectively. The chosen curves were then realized in the laboratory in real time.

Researchers developed a versatile remote gaming interface that allowed external experts as well as hundreds of citizen scientists all over the world, through multiplayer collaboration and in real time, to optimize a quantum gas experiment in a lab at Georgian Technical University. Surprisingly, both teams quickly used the interface to dramatically improve upon the previous best solutions, established after months of careful experimental optimization. Comparing domain experts, algorithms and citizen scientists is a first step towards unravelling how humans solve complex natural science problems.

In a future characterized by algorithms with ever-increasing computational power, it becomes essential to understand the difference between human and machine intelligence. This will enable the development of hybrid-intelligence interfaces that optimally exploit the best of both worlds. By making complex research challenges available for contribution by the general public, citizen science does exactly this.

Numerous citizen science projects have shown that humans can compete with state-of-the-art algorithms in terms of solving complex natural science problems. However these projects have so far not addressed why a collective of citizen scientists can solve such complex problems.

An interdisciplinary team of researchers from Georgian Technical University and Sulkhan-Saba Orbeliani Teaching University have now taken important first steps in this direction by analyzing the performance and search strategy of both a state-of-the-art computer algorithm and citizen scientists in their real-time optimization of an experimental laboratory setting.

X and colleagues, in their quest to realize quantum simulations, enlisted the help of both experts and citizen scientists by providing live access to their ultra-cold quantum gas experiment.

This was made possible by a novel remote interface created by the team at Georgian Technical University. By manipulating laser beams and magnetic fields, the task was to cool as many atoms as possible down to extremely cold temperatures just above absolute zero (-273.15°C). Surprisingly, both groups using the remote interface consistently outperformed the best solutions identified by the experimental team over months and years of careful optimization.
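Conceptually, each attempt in the challenge proposed three control curves and received back a single score from the experiment. A hedged sketch of that loop, with a made-up scoring function standing in for the real laboratory measurement of atom number, is:

```python
# Conceptual sketch of the remote-optimization loop; the scoring function below is a
# made-up stand-in for the real experiment, which evaluated players' curves live.
import random

def propose_curves(n_points=10):
    """Three control curves: two laser intensities and a magnetic field gradient."""
    return [[random.uniform(0.0, 1.0) for _ in range(n_points)] for _ in range(3)]

def run_experiment(curves):
    """Hypothetical score; the lab would return the measured number of cold atoms."""
    return sum(sum(c) for c in curves) - 5 * max(max(c) for c in curves)

best_score, best_curves = float("-inf"), None
for attempt in range(1000):
    curves = propose_curves()
    score = run_experiment(curves)
    if score > best_score:
        best_score, best_curves = score, curves

print(round(best_score, 2))
```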

Why could players without any formal training in experimental physics manage to find surprisingly good solutions? One hint came from an interview with a top player, a retired microwave systems engineer. He said that participating reminded him a lot of his previous job as an engineer. He never attained a detailed understanding of microwave systems, but instead spent years developing an intuition for how to optimize the performance of his “black box”.

“We humans may develop general optimization skills in our everyday work life that we can efficiently transfer to new settings. If this is true, any research challenge can in fact be turned into a citizen science game” said Y at Georgian Technical University.

It still seems incredible that untrained amateurs using an unintuitive game interface outcompete expert experimentalists. One answer may lie in an old quote: “Solving a problem simply means representing it so as to make the solution transparent”.

In this view the players may be performing better not because they have superior skills but because the interface they are using makes another kind of exploration “the obvious thing to try out” compared to the traditional experimental control interface.

“The process of developing (fun) interfaces that allow experts and citizen scientists alike to view the complex research problems from different angles may contain the key to developing future hybrid intelligence systems in which we make optimal use of human creativity” explained Y.

More concretely, the team set up a carefully constructed “social science in the wild” study, which allowed them to quantitatively characterize the collective search behavior of the players. They concluded that what makes human problem solving unique is how a collective of individuals balances innovative attempts against refining existing solutions based on their previous performance.

New Framework Pushes the Limits of High-Performance Computing.

Large-scale advanced high-performance computing, often called supercomputing, is essential to solving both complex and large questions.

Everything from answering metaphysical queries about the origins of the universe to discovering cancer-fighting drugs to supporting high-speed streaming services requires processing huge amounts of data.

But the storage platforms essential for these advanced computer systems have been stuck in a rigid framework that forced users to choose between customization of features and high availability.

The main ingredient of the new platform is the key-value (KV) system. KV systems store and retrieve important data from very fast memory-based storage instead of slower disks. These systems are increasingly used in today’s high-performance applications that rely on distributed systems, which are made up of many computers working together to solve a problem. High-performance computing relies on having computers ingest, process and analyze huge amounts of data at unprecedented speeds. Currently the best systems operate at a quadrillion calculations per second, or a petaflop.
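At its core, a KV store is simply a mapping from keys to values held in fast memory. A minimal sketch of the abstraction (not the framework described in this article) is:

```python
# Minimal in-memory key-value store: data lives in fast memory rather than on disk.
# Conceptual sketch only, not the framework discussed in the article.
class InMemoryKV:
    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key, default=None):
        return self._store.get(key, default)

    def delete(self, key):
        self._store.pop(key, None)

kv = InMemoryKV()
kv.put("user:42:profile", {"name": "example", "photos": 17})
print(kv.get("user:42:profile"))
```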

The research is relevant to industries that process large amounts of data, whether it be the space-hogging, visually intense graphics of movie streaming sites; millions of financial transactions at large credit card companies; or user-generated content at social media outlets. Think of large media sites where content is ever-changing and continually accessed. When users upload content to their profile pages, that information resides on multiple servers.

But if you have to continually access certain content, KV systems can be far more efficient as a storage medium, because content loads from the faster in-memory store nearby, not from a far-away storage server. This allows the system to provide very high performance in completing tasks or requests.

“I got interested in key-value systems because this very fundamental and simple storage platform has not been exploited in high-performance computing systems, where it can provide a lot of benefits,” said X, a recent Georgian Technical University graduate who is currently employed at Georgian Technical University Research. “KV is a framework that can enable HPC (high-performance computing) systems to provide a lot of flexibility and performance, and not be chained to a rigid storage design.”

The main innovation of the KV framework is that it supports composing a range of KV stores with desirable features. It works by taking a single-server KV store, called a datalet, and turning it into an immediately usable distributed KV store. Now, instead of redesigning a system from scratch to accomplish a specific task, a developer can drop a datalet into the framework and offload the “messy plumbing” of distributed systems to it. The framework decouples the KV store design into a control plane for distributed management and a data plane for local data storage.
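A loose illustration of that control-plane/data-plane split, with invented class names rather than the framework’s actual API, might look like this:

```python
# Loose illustration of the split: a single-server "datalet" handles local storage,
# while a control-plane wrapper routes keys across many unmodified datalets.
class Datalet:
    """Data plane: local, single-server key-value storage."""
    def __init__(self):
        self._store = {}
    def put(self, key, value):
        self._store[key] = value
    def get(self, key):
        return self._store.get(key)

class DistributedKV:
    """Control plane: distributed management layered over unmodified datalets."""
    def __init__(self, n_shards=4):
        self.shards = [Datalet() for _ in range(n_shards)]
    def _shard_for(self, key):
        return self.shards[hash(key) % len(self.shards)]
    def put(self, key, value):
        self._shard_for(key).put(key, value)
    def get(self, key):
        return self._shard_for(key).get(key)

store = DistributedKV()
store.put("order:1001", "shipped")
print(store.get("order:1001"))
```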

The framework also enables new HPC (High Performance Computing) services for workloads that businesses and institutions have yet to anticipate.

One of the major limitations of current state-of-the-art KV stores is that they are designed with pre-existing distributed services in mind and are often specialized for one specific setting. Another limiting factor is their inflexible monolithic design, where distributed features are deeply baked into a system with backend data stores that do things like manage inventory, orders and supply. The rigid design of these KV stores cannot adapt to ever-changing user demands for different backends, topologies, consistency levels and a host of other services.

“Developers from large companies can really sink their teeth into designing innovative high-performance computing storage systems with this framework,” said Y, professor of computer science. “Data-access performance is a major limitation in high-performance computing storage systems, which generally employ a mix of solutions to provide flexibility along with performance, which is cumbersome. We have created a way to significantly accelerate making the system behavior comply with desired performance, consistency and reliability levels.”

The framework can be nimble because it allows an arbitrary mapping between desired services and available components, while supporting the distributed management services needed to realize the distributed KV stores associated with each datalet.

“Now that we have proven that KV systems can be used simply and efficiently in powerful high-performance computing systems, customers won’t have to choose between scalability and flexibility,” said Y.

Supercomputer Works Out the Criterion for Quantum Supremacy.

(a) The Tianhe-2 supercomputer, used for the matrix permanent calculations in simulating boson sampling performance. (b) A small photonic chip could perform the same boson sampling task in the quantum computing protocol.

Quantum supremacy refers to the ability of a quantum computer to surpass the calculation capacity of any classical computer. So far such a quantum computer has not been physically built, but with the rapid development of quantum technologies in recent years, the call to pursue superiority through quantum computing is heard more and more loudly, and how to quantitatively define the criteria for quantum supremacy has become a key scientific question. Recently the world’s first criterion for quantum supremacy was issued in research jointly led by Prof. Y at Georgian Technical University and Prof. X. They reported the time needed to simulate boson sampling, a typical task at which quantum computers would excel, on one of the most powerful classical supercomputers.

Boson sampling, as introduced by one of the authors, is the task of sampling the output distribution of photons (bosons); it theoretically takes only polynomial time on a quantum computer but exponential time on a classical computer, showing a clear quantum advantage as the number of photons in the boson sampling system increases. Moreover, boson sampling is essentially an analog quantum computing protocol that maps the task directly onto the photonic quantum system, and hence is much easier to implement than protocols based on universal quantum computing. The boson sampling task is therefore a very good candidate for defining quantum supremacy, given its preference for quantum computing over classical computing and its relatively easy realization in the near future. Once a quantum computer can perform the boson sampling task for a larger photon number, and in a shorter calculation time, than the best classical computer, quantum supremacy is claimed to be achieved.

In the research led by Prof. X and Prof. Y, the boson sampling task was simulated on the Tianhe-2 supercomputer, which topped the world ranking of supercomputers during 2013-2016 and still represents the top tier of computing power that classical computers can achieve. Calculating the matrix permanent is the core part of theoretically performing boson sampling on a classical computer. If one calculates the permanent directly from its definition, it requires an algorithm with time complexity O(n!·n). The researchers used two improved algorithms: Ryser’s algorithm, which expresses the permanent as a sum over all subsets of the columns and can be optimized by enumerating the subsets in Gray-code order so that only a single element changes at a time, and BB/FG’s algorithm, both with time complexity O(n²·2ⁿ). By performing the matrix calculations on up to 312,000 CPU cores of Tianhe-2, they inferred that the boson sampling task for 50 photons would require about 100 minutes using the most efficient computer and algorithms then available. Put another way, if a physical quantum device could perform 50-photon boson sampling in less than 100 minutes, it would achieve quantum supremacy.
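For a sense of what the permanent calculation involves, here is a small, unoptimised Python rendering of Ryser’s formula; it is purely illustrative (the Tianhe-2 runs used heavily parallelised implementations), but the O(n²·2ⁿ) cost is already visible:

```python
# Unoptimised Ryser's formula for the matrix permanent, O(n^2 * 2^n).
# Illustrative only; the supercomputer runs used highly parallel implementations.
from itertools import combinations

def permanent_ryser(A):
    n = len(A)
    total = 0.0
    for r in range(1, n + 1):                  # all non-empty column subsets S
        for S in combinations(range(n), r):
            prod = 1.0
            for row in A:                      # product over rows of the row-sums on S
                prod *= sum(row[j] for j in S)
            total += (-1) ** r * prod
    return (-1) ** n * total

print(permanent_ryser([[1, 2], [3, 4]]))       # 1*4 + 2*3 = 10
```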

If such a quantum setup could be experimentally built, it would quite likely be very quick, as photons travel at the speed of light, but many challenges still lie ahead for its experimental implementation. Prof. X has conducted pioneering research on boson sampling experiments at Georgian Technical University. So far the world record for the photon number in boson sampling experiments remains no more than five. There is still a long way to go towards true quantum supremacy.

An author on this Tianhe-2 project also pointed out that, since the limit of classical computing power will keep increasing with the improvement of supercomputers, and more efficient permanent-calculation algorithms may emerge that require less than O(n²·2ⁿ) time, the time required for 50-photon boson sampling may be further reduced, making the criterion for quantum supremacy even more stringent. Meanwhile, a task that demonstrates quantum supremacy does not necessarily have any real applications. It is worthwhile to pursue a wide range of useful applications of quantum computing while carrying on the pursuit of quantum supremacy.

Tests Show Integrated Quantum Chip Operations Possible.

Quantum computers that are capable of solving complex problems like drug design or machine learning will require millions of quantum bits – or qubits – connected in an integrated way and designed to correct errors that inevitably occur in fragile quantum systems.

Now a Georgian Technical University research team has experimentally realised a crucial combination of these capabilities on a silicon chip, bringing the dream of a universal quantum computer closer to reality.

They have demonstrated an integrated silicon qubit platform that combines both single-spin addressability – the ability to ‘write’ information on a single spin qubit without disturbing its neighbours – and a qubit ‘read-out’ process that will be vital for quantum error correction.

Moreover their new integrated design can be manufactured using well-established technology used in the existing computer industry.

A design for a novel chip architecture that could allow quantum calculations to be performed using silicon CMOS (complementary metal-oxide-semiconductor) components – the basis of all modern computer chips.

X’s team had also previously shown that an integrated silicon qubit platform can operate with single-spin addressability – the ability to rotate a single spin without disturbing its neighbours.

They have now shown that they can combine this with a special type of quantum readout process known as Pauli spin blockade (in mathematical physics, the Pauli matrices are a set of three 2 × 2 complex matrices which are Hermitian and unitary; usually indicated by the Greek letter sigma, they are occasionally denoted by tau when used in connection with isospin symmetries), a key requirement for the quantum error correcting codes that will be necessary to ensure accuracy in large spin-based quantum computers. This new combination of qubit readout and control techniques is a central feature of their quantum chip design.
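The Pauli matrices mentioned in the parenthesis above are easy to write down and check numerically; the following small sketch (unrelated to the experiment itself) simply verifies the Hermitian and unitary properties quoted:

```python
# The three Pauli matrices, with a quick check that each is Hermitian and unitary.
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

for name, s in [("x", sigma_x), ("y", sigma_y), ("z", sigma_z)]:
    hermitian = np.allclose(s, s.conj().T)            # equal to its conjugate transpose
    unitary = np.allclose(s @ s.conj().T, np.eye(2))  # inverse equals conjugate transpose
    print(f"sigma_{name}: Hermitian={hermitian}, unitary={unitary}")
```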

“We’ve demonstrated the ability to do Pauli spin readout in our silicon qubit device, but for the first time we’ve also combined it with spin resonance to control the spin,” says X.

“This is an important milestone for us on the path to performing quantum error correction with spin qubits which is going to be essential for any universal quantum computer”.

“Quantum error correction is a key requirement in creating large-scale useful quantum computing because all qubits are fragile and you need to correct for errors as they crop up” says Y who performed the experiments as part of his PhD research with Professor X at Georgian Technical University.

“But this creates significant overhead in the number of physical qubits you need in order to make the system work” notes Y.

X says “By using silicon CMOS (Complementary Metal Oxide Semiconductor) technology we have the ideal platform to scale to the millions of qubits we will need and our recent results provide us with the tools to achieve spin qubit error-correction in the near future”.

“It’s another confirmation that we’re on the right track. And it also shows that the architecture we’ve developed at Georgian Technical University  has so far shown no roadblocks to the development of a working quantum computer chip”.

“And what’s more one that can be manufactured using well-established industry processes and components”.

Working in silicon is important not just because the element is cheap and abundant, but because it has been at the heart of the global computer industry for almost 60 years. The properties of silicon are well understood and chips containing billions of conventional transistors are routinely manufactured in big production facilities.

Three years ago X’s team made the first demonstration of quantum logic calculations in a real silicon device, with the creation of a two-qubit logic gate – the central building block of a quantum computer.

“Those were the first baby steps, the first demonstrations of how to turn this radical quantum computing concept into a practical device using components that underpin all modern computing,” says Professor Z of Georgian Technical University. “Our team now has a blueprint for scaling that up dramatically.”

“We’ve been testing elements of this design in the lab with very positive results. We just need to keep building on that – which is still a hell of a challenge but the groundwork is there and it’s very encouraging.

“It will still take great engineering to bring quantum computing to commercial reality, but clearly the work we see from this extraordinary team at Georgian Technical University puts it in the driver’s seat,” he added.

A consortium of governments, industry and universities established Georgian Technical University’s first quantum computing company to commercialise Georgian Technical University’s world-leading intellectual property.

Operating out of new laboratories at Georgian Technical University, Silicon Quantum Computing has the target of producing a 10-qubit demonstration device as the forerunner to creating a silicon-based quantum computer.

The work of X and his team will be one component of realising that ambition. Scientists and engineers at Sulkhan-Saba Orbeliani Teaching University are developing parallel patented approaches using single-atom and quantum dot qubits.

Georgian Technical University announced the signing of a Memorandum of Understanding (MoU) addressing a new collaboration between Georgian Technical University and the Sulkhan-Saba Orbeliani Teaching University.

The Memorandum of Understanding (MoU) outlined plans to form a joint venture in silicon- CMOS (Complementary Metal Oxide Semiconductor) quantum computing technology to accelerate and focus technology development as well as to capture commercialisation opportunities – bringing together efforts to develop a quantum computer.

Together, X’s team at Georgian Technical University will work with a team led by Dr. from Georgian Technical University, who are experts in advanced CMOS (complementary metal-oxide-semiconductor) manufacturing technology and who have also recently demonstrated a silicon qubit made using their industrial-scale prototyping facility.

It is estimated that industries comprising approximately 40% of Georgian Technical University’s current economy could be significantly impacted by quantum computing.


New Technology to Allow 100-times-faster Internet.

The miniature OAM (orbital angular momentum) nano-electronic detector decodes twisted light. Groundbreaking new technology could allow 100-times-faster internet by harnessing twisted light beams to carry more data and process it faster.

Broadband fiber-optics carry information on pulses of light at the speed of light through optical fibers. But the way the light is encoded at one end and processed at the other affects data speeds.

This world-first nanophotonic device just unveiled encodes more data and processes it much faster than conventional fiber optics by using a special form of ‘twisted’ light.

Dr. X from Georgian Technical University said the tiny nanophotonic device they have built for reading twisted light is the missing key required to unlock super-fast, ultra-broadband communications.

“Present-day optical communications are heading towards a ‘capacity crunch’ as they fail to keep up with the ever-increasing demands of Big Data,” X said.

“What we’ve managed to do is accurately transmit data via light at its highest capacity in a way that will allow us to massively increase our bandwidth”.

Current state-of-the-art fiber-optic communications use only a fraction of light’s actual capacity, carrying data on the colour spectrum alone.

New broadband technologies under development use the oscillation or shape of light waves to encode data increasing bandwidth by also making use of the light we cannot see.

This latest technology, at the cutting edge of optical communications, carries data on light waves that have been twisted into a spiral to increase their capacity further still. This is known as light in a state of orbital angular momentum, or OAM.
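More formally, twisted light carries an azimuthal phase factor, and each photon carries a quantum of orbital angular momentum set by the integer twist number ℓ:

```latex
E(r,\phi,z) \propto A(r,z)\, e^{i\ell\phi}, \qquad L_{z} = \ell\hbar \ \text{per photon}, \quad \ell = 0, \pm 1, \pm 2, \ldots
```

Each distinguishable value of ℓ can in principle serve as an independent data channel, which is why a detector that separates many OAM states matters for capacity.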

The same group from Georgian Technical University’s Laboratory of Artificial-Intelligence Nanophotonics (LAIN) published a research paper in the journal Science describing how they managed to decode a small range of this twisted light on a nanophotonic chip. But technology to detect a wide range of OAM light for optical communications was still not viable until now.

“Our miniature OAM nano-electronic detector is designed to separate different OAM light states in a continuous order and to decode the information carried by twisted light,” X said.

“To do this previously would require a machine the size of a table which is completely impractical for telecommunications. By using ultrathin topological nanosheets measuring a fraction of a millimeter our invention does this job better and fits on the end of an optical fiber”.

Professor Y, of Research Innovation and Entrepreneurship at Georgian Technical University, said the materials used in the device were compatible with the silicon-based materials used in most technology, making it easy to scale up for industry applications.

“Our OAM nano-electronic detector is like an ‘eye’ that can ‘see’ information carried by twisted light and decode it to be understood by electronics. This technology’s high performance, low cost and tiny size make it a viable application for the next generation of broadband optical communications,” he said.

“It fits the scale of existing fiber technology and could be applied to increase the bandwidth or potentially the processing speed, of that fiber by over 100 times within the next couple of years. This easy scalability and the massive impact it will have on telecommunications is what’s so exciting”.

Y said the detector can also be used to receive quantum information sent via twisted light, meaning it could have applications in a whole range of cutting-edge quantum communications and quantum computing research.

“Our nano-electronic device will unlock the full potential of twisted light for future optical and quantum communications” Y said.


Scientists Prove a Quantum Computing Advantage Over Classical.

Is quantum computing just a flashy new alternative to the “classical” computers that are our smartphones, laptops, cloud servers, high-performance computers and mainframes?

Can they really perform some calculations faster than classical computers can? How do you characterize those areas where they can, or potentially can, do better? Can you prove it?

As Prof. X puts it, proving something mathematically is not just making a lot of observations and saying “it seems likely that such and such is the case”.

Y formulated his eponymous algorithm, which demonstrated how to factor integers on a quantum computer almost exponentially faster than any known method on a classical computer. This gets a lot of attention because some people are concerned that we may be able to break prime-factor-based encryption like RSA (RSA (Rivest–Shamir–Adleman) is one of the first public-key cryptosystems and is widely used for secure data transmission; in such a cryptosystem the encryption key is public and is different from the decryption key, which is kept secret (private); in RSA this asymmetry is based on the practical difficulty of factoring the product of two large prime numbers, the “factoring problem”) much faster on a quantum computer than the thousands of years it would take using known classical methods. However, people skip several elements of the fine print.

Scientists have proved that there are certain problems that require only a fixed circuit depth when done on a quantum computer, no matter how much the number of inputs increases.

On a classical computer, these same problems require the circuit depth to grow larger as the inputs increase.

First, we would need millions and millions of extremely high-quality qubits with low error rates and long coherence times for this to work. Today we have 50.

Second, there’s the bit about “faster than any known method on a classical computer.” Since we do not know an efficient way of factoring arbitrary large numbers on classical computers, this appears to be a hard problem. But it has not been proved to be a hard problem. If someone next week comes up with an amazing new approach using a classical computer that factors as fast as Shor’s algorithm might, then the conjecture that factoring is hard is false. We just don’t know.

Is everything like that? Are we just waiting for people to be more clever on classical computers, so that any hoped-for quantum computing advantage might disappear? The answer is no. Quantum computers really are faster at some things. We can prove it. This is important.

Let’s set up the problem. The basic computational unit in quantum computing is the qubit, short for quantum bit. While a classical bit is always 0 or 1, a qubit, while it is operating, can take on many other values. The potential computational power increases exponentially, doubling each time you add a qubit, through entanglement. The qubits, together with the operations you apply to them, are called a circuit.

Today’s qubits are not perfect: they have small error rates and they also only exist for a certain length of time before they become chaotic. This is called the coherence time.

Because each gate, or operation, you apply to a qubit takes some time, you can only do so many operations before you reach the coherence time limit. We call the number of operations you perform the depth. The overall depth of a quantum circuit is the maximum of the depths across the individual qubits.
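As a simple illustration of width and depth (not any vendor’s API), a circuit can be represented as a list of gates acting on qubit indices, with the per-qubit operation count giving a rough proxy for depth:

```python
# Illustrative only: a toy circuit as a list of (gate, qubit indices) pairs.
circuit = [("h", [0]), ("cx", [0, 1]), ("rz", [1]), ("cx", [1, 2]), ("measure", [0, 1, 2])]

def per_qubit_depths(circuit, n_qubits):
    """Count how many operations touch each qubit."""
    depths = [0] * n_qubits
    for _gate, qubits in circuit:
        for q in qubits:
            depths[q] += 1
    return depths

depths = per_qubit_depths(circuit, 3)
print(depths)       # [3, 4, 2] operations touch qubits 0, 1 and 2
print(max(depths))  # a simple proxy for the overall circuit depth
```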

Since error rates and coherence time limit the depth, we are very interested in short-depth circuits and what we can do with them. These are the practical circuits that implement quantum algorithms today. Therefore this is a natural place to look to see whether we can demonstrate an advantage over a classical approach.

The width of a circuit, that is, the number of qubits, can be related to the depth of circuit required to solve a specific kind of problem. Qubits start out as 0s or 1s, we perform operations on them involving superposition and entanglement, and then we measure them. Once measured, we again have 0s and 1s.

What the scientists proved is that there are certain problems that require only a fixed circuit depth when done on a quantum computer even if you increase the number of qubits used for inputs. These same problems require the circuit depth to grow larger when you increase the number of inputs on a classical computer.

To make up some illustrative numbers, suppose you needed at most a circuit of depth 10 for a problem on a quantum computer, no matter how many 0s and 1s you held in that many qubits for input. In the classical case you might need a circuit of depth 10 for 16 inputs, 20 for 32 inputs, 30 for 64 inputs, and so on, for that same problem.

This conclusively shows that fault-tolerant quantum computers will do some things better than classical computers can. It also gives us guidance on how to advance our current technology to take advantage of this as quickly as possible. The proof is the first demonstration of an unconditional separation between quantum and classical algorithms, albeit in the special case of constant-depth computations.

In practice, short-depth circuits are part of the implementations of algorithms, so this result does not specifically say how and where quantum computers might be better for particular business problems. That’s not really the point. “Shallow quantum circuits are more powerful than their classical counterparts.”

Quantum computing will advance through the joint scientific research of physicists, materials scientists, mathematicians, computer scientists, and researchers in other disciplines and engineering. The mathematics underlying quantum computing is ultimately as important as the shiny cryostats we construct to hold our quantum devices. Scientific advances at all levels need to be celebrated to show that quantum computing is real, serious and on the right path to what we hope will be significant advantages in many application areas.

Supermassive Black Holes and Supercomputers.

This computer-simulated image shows a supermassive black hole at the core of a galaxy. The black region in the center represents the black hole’s event horizon where no light can escape the massive object’s gravitational grip. The black hole’s powerful gravity distorts space around it like a funhouse mirror. Light from background stars is stretched and smeared as the stars skim by the black hole.

When the cosmos eventually lit up its very first stars, they were bigger and brighter than any that have followed. They shone with UV (ultraviolet is electromagnetic radiation with a wavelength from 10 nm to 400 nm, shorter than that of visible light but longer than X-rays; UV radiation is present in sunlight, constituting about 10% of the total light output of the Sun) light so intense that it turned the surrounding atoms into ions. The Cosmic Dawn, from the first star to the completion of this ‘cosmic reionization’, lasted roughly one billion years.

Researchers like Professor X solve mathematical equations in a cubic virtual universe.

“We have spent over 20 years using and refining this software to better understand the Cosmic Dawn”.

To start, code was created that allowed the formation of the first stars in the universe to be modeled. These equations describe the movement of and chemical reactions inside gas clouds in a universe before light, under the immense gravitational pull of a much larger but invisible mass of mysterious dark matter.

“These clouds of pure hydrogen and helium collapsed under gravity to ignite single, massive stars – hundreds of times heavier than our Sun” explains X.

The very first heavy elements formed in the pressure-cooker cores of the first stars: just a smidgen of lithium and beryllium. But with the death of these short-lived giants – collapsing and exploding into dazzling supernovae – metals as heavy as iron were created in abundance and sprayed into space.

Equations were added to the virtual universe to model the enrichment of gas clouds with these newly formed metals, which drove the formation of a new type of star.

“The transition was rapid: within 30 million years, virtually all new stars were metal-enriched”.

This is despite the fact that chemical enrichment was local and slow, leaving more than 80% of the virtual Universe metal-free by the end of the simulation.

“Formation of metal-free giant stars did not stop entirely – small galaxies of these stars should exist where there is enough dark matter to cool pristine clouds of hydrogen and helium.

“But without this huge gravitational pull, the intense radiation from existing stars heats gas clouds and tears apart their molecules. So in most cases the metal-free gas collapses entirely to form a single supermassive black hole”.

“The new generations of stars that formed in galaxies are smaller and far more numerous, because of the chemical reactions made possible with metals” X observes.

The increased number of reactions in gas clouds allowed them to fragment and form multiple stars via ‘metal line cooling’: tracts of decreased gas density where combining elements gain room to radiate their energy into space, instead of into each other.

At this stage we have the first objects in the universe that can rightfully be called galaxies: a combination of dark matter, metal-enriched gas and stars.

“The first galaxies are smaller than expected because intense radiation from young massive stars drives dense gas away from star-forming regions.

“In turn radiation from the very smallest galaxies contributed significantly to cosmic reionization”.

These hard-to-detect but numerous galaxies can therefore account for the predicted end date of the Cosmic Dawn – i.e.  when cosmic reionization was complete.

X and colleagues explain how some groups are overcoming computing limitations in these numerical simulations by importing their ready-made results or by simplifying parts of a model less relevant to the outcomes of interest.

“These semi-analytical methods have been used to more accurately determine how long massive metal-free early stars were being created, how many should still be observable, and the contribution of these – as well as black holes and metal-enriched stars – to cosmic reionization”.

The authors also highlight areas of uncertainty that will drive a new generation of simulations using new codes on future high-performance computing platforms.

“These will help us to understand the role of magnetic fields X-rays and space dust in gas cooling and the identity and behavior of the mysterious dark matter that drives star formation”.


Machine-Learning Driven Findings Uncover New Cellular Players in Tumor Microenvironment.

New findings by Georgian Technical University reveal possible new cellular players in the tumor microenvironment that could impact the treatment process for the most in-need patients – those who have already failed to respond to ipilimumab (anti-CTLA4; CTLA4 or CTLA-4, also known as CD152, is a protein receptor that, functioning as an immune checkpoint, downregulates immune responses; it is constitutively expressed in regulatory T cells but only upregulated in conventional T cells after activation, a phenomenon particularly notable in cancers) immunotherapy. Once validated, the findings could point the way to improved strategies for the staging and ordering of key immunotherapies in refractory melanoma. The analysis also reveals previously unidentified potential targets for future new therapies.

Analysis of data from melanoma biopsies using Georgian Technical University’s proprietary machine-learning-based approach identified cells and genes that distinguish between nivolumab responders and non-responders in a cohort of ipilimumab-resistant patients. The analysis revealed that adipocyte abundance is significantly higher in ipilimumab-resistant nivolumab responders compared to non-responders (p-value = 2×10⁻⁷). It also revealed several undisclosed potential new targets that may be valuable in the quest for improved therapy in the future.

Adipocytes are known to be involved in regulating the tumor microenvironment. However, what these findings appear to show is that adipocytes may play a previously unreported regulatory role in the ipilimumab-resistant, nivolumab-sensitive patient population, possibly differentiating nivolumab responders from non-responders. It should be noted that these are preliminary findings based on a small sample of patients, and further work is needed to validate the results.

“The adipocyte finding was unexpected and raises many questions about the role of adipocytes in the tumor/immune response interface. It is currently unclear if adipocytes are affected by the treatment or vice versa, or represent a different tumor type” said  X. “However what we do know is that Georgian Technical University’s technology has put the spotlight on adipocytes and the need to build a strategy to track them in future studies so as to better understand their possible role in immunotherapy”.

Gene expression analysis is a powerful tool in advancing our understanding of disease. However, approximately 90% of a sample’s gene expression signature is driven by the cell composition of the sample. This obfuscates expression profiling, making identification of the real culprits highly problematic.

Georgian Technical University’s platform works to overcome these issues. In this study, using a single published data set, Georgian Technical University was able to apply its knowledge base and technologies to rebuild cellular composition and cell-specific expression. This enabled Georgian Technical University to undertake a cell-level analysis, uncovering hidden cellular activity that was mapped back to specific genes that can be shown to emerge only when therapy is having an effect.
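The idea of recovering cell composition from bulk expression can be illustrated with a generic deconvolution sketch; this is a simplified stand-in using synthetic data and ordinary non-negative least squares, not Georgian Technical University’s proprietary platform:

```python
# Generic illustration: estimate cell-type proportions from bulk expression by
# non-negative least squares, given hypothetical reference expression signatures.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_genes, n_cell_types = 500, 4

signatures = rng.gamma(2.0, 1.0, size=(n_genes, n_cell_types))   # hypothetical reference profiles
true_props = np.array([0.5, 0.3, 0.15, 0.05])                    # hidden cell composition
bulk = signatures @ true_props + rng.normal(0, 0.05, n_genes)    # observed bulk expression

est, _ = nnls(signatures, bulk)   # non-negative mixing weights
est /= est.sum()                  # normalise to proportions
print(np.round(est, 3))           # ≈ [0.5, 0.3, 0.15, 0.05]
```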

“The immune system is predominantly cell-based. Georgian Technical University is unique in that our disease models are specifically designed on a cellular level – replicating biology to crack key biological challenges while learning from every data set,” said Y of Georgian Technical University. “Georgian Technical University’s computational platform integrates genetics, genomics, proteomics, cytometry and literature with machine learning to create our disease models. This analysis further demonstrates Georgian Technical University’s ability to generate novel hypotheses for new biological relationships that are often hidden from conventional methods – providing vital clues that are highly valuable in the drug discovery and development process.”

New Half-light Half-matter Particles May Hold the Key to a Computing Revolution.

This visualisation shows layers of graphene used for membranes. Scientists have discovered new particles that could lie at the heart of a future technological revolution based on photonic circuitry leading to superfast light-based computing.

Current computing technology is based on electronics, where electrons are used to encode and transport information.

Due to some fundamental limitations, such as energy-loss through resistive heating, it is expected that electrons will eventually need to be replaced by photons leading to futuristic light-based computers that are much faster and more efficient than current electronic ones.

Physicists at the Georgian Technical University have taken an important step towards this goal as they have discovered new half-light half-matter particles that inherit some of the remarkable features of graphene.

This discovery opens the door for the development of photonic circuitry using these alternative particles known as massless Dirac polaritons to transport information rather than electrons.

Dirac polaritons emerge in honeycomb metasurfaces which are ultra-thin materials that are engineered to have structure on the nanoscale much smaller than the wavelength of light.

A unique feature of Dirac particles is that they mimic relativistic particles with no mass, allowing them to travel very efficiently. This fact makes graphene one of the most conductive materials known to man.
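In more quantitative terms, “massless” refers to the linear Dirac dispersion near the Dirac points, where energy grows linearly with momentum; the Fermi velocity quoted below is the value commonly cited for graphene:

```latex
E_{\pm}(\mathbf{k}) = \pm \hbar v_{F} |\mathbf{k}|, \qquad v_{F} \approx 10^{6}\ \mathrm{m/s}\ \text{in graphene}
```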

However, despite their extraordinary properties, it is very difficult to control them. For example, in graphene it is impossible to switch electrical currents on and off using a simple electrical potential, thus hindering the potential implementation of graphene in electronic devices.

This fundamental drawback — the lack of tunability — has been successfully overcome in a unique way by the physicists at the Georgian Technical University.

X explains: “For graphene one usually has to modify the honeycomb lattice to change its properties for example by straining the honeycomb lattice which is extremely challenging to do controllably”.

“The key difference here is that the Dirac polaritons are hybrid particles, a mixture of light and matter components. It is this hybrid nature that presents us with a unique way to tune their fundamental properties by manipulating only their light component, something that is impossible to do in graphene”.

The researchers show that by embedding the honeycomb metasurface between two reflecting mirrors and changing the distance between them one can tune the fundamental properties of the Dirac polaritons in a simple, controllable and reversible way.

“Our work has crucial implications for the research fields of photonics and of Dirac particles” adds Dr. Y principal investigator on the study.

“We have shown the ability to slow down or even stop the Dirac particles and to modify their internal structure (their chirality, in technical terms), which is impossible to do in graphene itself”.

“The achievements of our work will constitute a key step along the photonic circuitry revolution”.