Category Archives: A.I./Robotics

Georgian Technical University Researchers Work To Incorporate AI Into Hypersonic Weapon Technology.

A diverse set of technologies to be developed at Georgian Technical University Laboratories could strengthen future hypersonic and other autonomous systems. A research collaboration led by researchers from the Georgian Technical University Laboratory hopes to use artificial intelligence (AI) to enhance the capabilities of hypersonic vehicles such as long-range missiles. Along with researchers from Georgian Technical University, several universities have signed on to form a coalition focused on academic partnerships and on developing autonomy customized for hypersonic flight.

X at Georgian Technical University, who leads the coalition, explained how AI would improve hypersonic vehicles. "We have an internal effort that we refer to as the Georgian Technical University Hypersonic Missions Campaign," he said. "Ultimately the goal is to make our hypersonic flight systems more autonomous to give them more utility. They are autonomous today from the standpoint that they act on their own: they are unmanned systems, they fly with an autopilot. We are looking to incorporate basically higher levels of artificial intelligence into them that will make them systems able to intelligently adapt to their environment."

Currently a test launch for a hypersonic weapon — a long-range missile flying a mile per second or faster — takes weeks of planning. With the advent of AI and automation, the researchers believe this time can be reduced to minutes. X said that by plugging AI into these systems, a bounty of new options would become available. "I wouldn't say it would make things easier, but it lets the platform handle a broader sweep of missions," he said. "You get increased utility and more functionality out of the system. Right now the current technology is coordinate-seeking. We would like to be position-adapting, so the vehicle can either fly to updated coordinates or even seek targets instead of just flying to coordinates, so it can home in on targets."

X explained why it is so difficult to implement newer techniques like AI in hypersonic weapon systems. "The biggest challenges have to do with the flight environment itself," he said. "The flight environment is extremely hard to plan for and successfully fly in because of the challenges you face from an aerodynamic and an aerothermal standpoint." According to X, hypersonic vehicles fly through the atmosphere at speeds of approximately a mile per second or greater. This means that the aerothermal loads on the vehicle can be extreme and very hard to predict. "So you want to make sure that anything you do with the vehicle as you fly it will stay within the aerodynamic and aerothermal performance boundaries of the system," he said. "That makes incorporating things like ways to autonomously plan and implement new flight trajectories more challenging than on flight systems that don't have those same types of constraints."

X also said he anticipates that more university partners will be added to the coalition in the coming months. The coalition's broader ambition is to serve as a wellspring for other industries by developing ideas that could lead to safer, more efficient robotics in autonomous transportation, manufacturing, space or agriculture. If the group reaches its goals, it will have created computing algorithms that compress 12 hours of calculations into a single millisecond, all on a small onboard computer.
X added that multiple groups within the coalition are currently working on different aspects of implementing AI for hypersonic vehicles; they will soon move on to applications beyond hypersonic vehicles. The Hypersonic Missions Campaign will run for a total of six and a half years, and he expects research breakthroughs that will lead to actual applications in the next year or two. Georgian Technical University Labs is a natural fit for this project, having developed and tested hypersonic vehicles for more than 30 years.

Georgian Technical University Getting Labs Ready For AI — Five Things To Consider.

Artificial intelligence (AI) is everywhere we turn — from smart cars, drones and music streaming to social media, cell phones and banking. AI and machine learning are also innovations whose time has come in the lab. Researchers are looking for ways to more easily and effectively access, analyze and spotlight scientific data that is growing in volume and complexity and is often dispersed across hard-to-access silos. Being able to make data-driven hypotheses and decisions is paramount for all scientists and technicians in the life sciences, bio-pharmaceutical and food science disciplines, and Georgian Technical University labs can now harness advanced AI tools to do this, accomplishing in mere seconds or minutes what once took weeks or months. Leveraging the unique capabilities of AI to accelerate this journey, however, starts with an understanding of the current state of scientific and operational data in the laboratory. Here are five steps to help transition toward an AI-rich Lab of the Future with confidence.

1. Liberate the data. Scientific data remains anchored to laptops, instruments, paper records and data silos within and across today's organizations. Data has also been locked up in many "home grown" systems, data warehouses and spreadsheets for decades, with each data source being in a proprietary format tied to a particular instrument, a unique analysis or an individual. The first major step in making laboratory data AI-friendly is to ensure that all experimental data and scientific conclusions can be easily accessed and accurately and securely shared, while making them portable and moving away from highly customized or proprietary systems. Liberating data starts as simply as transforming files into standard formats — such as PDF or CSV (comma-separated values) — and ensuring that files are appropriately described (e.g., with the who, what, where, why and how of the analysis). For example, making critical information like high-content-screening image data accessible beyond instrument-specific analytical software provides access for others in the organization and fosters collaboration and discovery acceleration. Secure sharing technologies, such as cloud storage, also make data accessible to a wide range of authorized collaborators.

2. Clearly define end goals. Even the best technologies cannot succeed if they are not thoughtfully applied to precise scientific goals and if the analytics are not clearly defined. In general, AI tools and solutions are most powerful when mapped to very specific goals and analytic targets. For example, to identify patients who are most likely to respond to certain medical treatments, different AI tools would be employed than for predictive analysis of a drug's side effects in a clinical trial. Similarly, a different configuration of AI image-recognition algorithms would be applied to classify tissues at risk of invasive cancer than to avoid hitting pedestrians in crosswalks with a self-driving car. The more clearly the end goals and key analytics are articulated at the outset (e.g., what is "in scope" and what is "out of scope"), the better the outcome will be, and the more rapidly and proactively effective course corrections can be made.

3. Normalize data. Getting data formats analysis-ready before even asking AI to make sense of the data is critical, especially as data comes in multiple forms and from many sources, including health records, genetic data, public data, clinical trial data, cellular images and much more. Here it is important to make the basis for analysis consistent. For example, if Patient X's height is recorded in centimeters and Patient Y's height is in inches, then analyses of the two without common units would result in erroneous conclusions (see the sketch at the end of this article). Working toward data standards with commonly accepted descriptors, definitions and units, through the efforts of standards organizations, is a major step in optimizing data aggregation and analysis and making results meaningful for AI. For example, ensuring that commonly accepted data standards are used when choosing AI to auto-map patient data to clinical trial data standards can greatly accelerate the power of the underlying analysis.

4. Maximize operational and infrastructure data. An important part of the move to AI and the Lab of the Future is optimizing operational and infrastructure data so that scientific results can be easily validated and reproduced. To do this, it is critical to regularly analyze and apply operational and infrastructure data such as temperature, humidity, power surges and reagent use. Maintaining the temperature and humidity requirements of clean-room facilities used for biologic drugs, for example, is key. Organizations can layer AI onto their lab infrastructure, but if that foundation has high variability in instrument operational data and performance, the full benefits of the technology will not be realized.

5. Think solutions and services. Bringing AI into the lab is not just a software decision; it also requires end-to-end thinking, with an overall solution that can be sustained over time. User requirements; configuration plans; integration with other critical experimental workflows, software, hardware and instruments; instrument calibration and re-calibration; team training; and troubleshooting are all important aspects to consider holistically when planning. Implementing a "point" AI technology without a clear understanding of how it will affect the whole experimental ecosystem can easily lead to unexpected results. Avoiding this requires identifying and then partnering with a strong team of internal and external players and experts to ensure that the full workflow (from science to test results to data analyses) is taken into account.

Reaping the promise of AI. The promise that AI holds for laboratories is exciting.
Getting ready with thoughtful preparation and solution readiness will pay off exponentially by multiplying the power of scientists and technicians to meet today's challenges and opportunities and push ahead to new horizons.
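
To make step 3 ("Normalize data") concrete, here is a minimal sketch of a unit-normalization pass over patient records, as in the centimeters-versus-inches example above. The record layout and conversion table are illustrative assumptions, not a standard schema.

```python
# Hypothetical patient-height records in mixed units, normalized to
# centimeters before any aggregation or AI analysis.
UNIT_TO_CM = {"cm": 1.0, "in": 2.54, "m": 100.0}

def height_in_cm(value: float, unit: str) -> float:
    """Convert a height measurement to centimeters."""
    return value * UNIT_TO_CM[unit.lower()]

records = [
    {"patient": "X", "height": 172.0, "unit": "cm"},
    {"patient": "Y", "height": 68.0, "unit": "in"},
]
for r in records:
    r["height_cm"] = round(height_in_cm(r["height"], r["unit"]), 1)
    print(r["patient"], r["height_cm"])   # X 172.0, Y 172.7
```

Without the common basis, Patient Y's 68 would compare directly against Patient X's 172, which is exactly the erroneous conclusion the normalization step is meant to prevent.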

Georgian Technical University Robotic Companion For Seniors Could Reduce Loneliness.

For many older people, particularly those who have lost a spouse or partner, living alone can be daunting. In addition to sometimes needing assistance to safely run their appliances, take their medication and conduct everyday household tasks, seniors also often face loneliness and boredom, equally important problems that are not usually addressed. Service Robotics, a startup company founded by X and Y, has developed Connect, which uses artificial intelligence (AI) to give seniors a robotic companion that plays audio and video based on the user's personal preferences, keeps track of required day-to-day tasks like turning on the porch lights at night, and connects the person to the outside world.

"What it does is take a series of data points about your likes and your dislikes and your routines, and it uses that to offer content that is personalized to you, both audio and video, as well as simple things like medication reminders and calendar activities in general," X said in an exclusive interview.

The robot has several features, including voice-enabled chats in which the robot can answer questions and play music and videos on request, direct video calling with family, friends and a central care representative, and reminders of appointments and medications. What sets Connect apart from other voice-activated technologies, however, is that it uses AI to learn about the user and personalize some of the features. The technology learns a person's likes and dislikes and provides content that is relevant to the person, helping to keep them active and engaged to stave off loneliness. It also connects the person to other users with similar interests, whether television programs, knitting or yoga.

According to X, users begin with a multi-hour meeting that allows the technology to get a basic reading of their personality, hobbies and dislikes. He also said family members can be included in the initial meeting, and important information such as birthdays can be entered into the system. The AI-enabled robot continues to learn about the user over time and updates its personalization. Connect is also linked to a care center that can put the user in a video chat with someone who is familiar with the person's situation and can help them with whatever they may need at any given time.

While the robot is designed to be placed in a centralized location like a living room and remain stationary, it can move if needed. This allows concerned family members to connect to the robot with a companion smartphone application and scan the dwelling to make sure the senior is not in distress.

The team originally received funding for Connect to help them write the software and code, and will begin conducting a three-month pilot trial. X explained that the pilot will initially include a scaled-down version of Connect that focuses strictly on combating loneliness. However, plans are in place for future iterations of the technology to be integrated with smart appliances and personalized health wearables. The researchers said they are currently seeking phase II funding to further the technology. "We are at the beginning of a long journey with a lot of exciting things," X said, adding that they are also planning a pilot to use the technology with people in higher-dependency care environments like retirement and nursing homes.

X explained that the idea sprouted from personal experiences. "We came together a little more than two years ago with the idea that we can use robotics for the benefit of ordinary people," X said. "We've been promised robots in some way, so we decided, with our knowledge of the landscape of telecoms and technology, that now is the time, and that all it would really take is a focused approach on a specific application. So we decided to look at companion robots for older adults, because we both have older relatives who live alone and are struggling to make that transition into a new phase of life, never having lived alone before."

One requirement X wanted the robot to meet is that it must be usable by the majority of seniors, who do not have extensive experience with technology. "We worked very hard to build artificial intelligence capabilities that allow them to interact with the robot just through voice," he said. "So everything you can do with Connect you can do with a voice command. There is a huge barrier to entry in the older adult and senior market, because this isn't a generation that has grown up with technology. There are a lot of solutions out there that basically, if you don't have a smartphone, you can't use, or even if you don't need a smartphone, they require some sort of confidence or low-level technology awareness."

Georgian Technical University Soft Robot Makes Recycling Easier.

Georgian Technical University can detect if an object is paper, metal or plastic. Georgian Technical University researchers say that such a system could potentially help enable the convenience of single-stream recycling with the lower contamination rates that conform to Georgia's new recycling standards. While single-stream recycling is convenient for consumers, it has become a burden for recycling companies, whose employees have to sift through piles of recycled items to determine what is plastic, paper or metal. In this process, newspapers, plastic bottles and other recycled items move quickly along a conveyor belt while human workers manually sort them into individual piles. For recycling companies this process is costly, and it can be unsafe for the workers.

Researchers from the Georgian Technical University (GTU) and Sulkhan-Saba Orbeliani University have created Georgian Technical University, a soft robotic system comprising a soft Teflon hand with tactile sensors on its fingertips that enable it to detect an object's size and stiffness, and ultimately decipher whether it is made of plastic, paper or metal.

"The motivation behind Georgian Technical University was that we saw these recycling plants and they still require a large amount of manual labor," PhD student X said. "Even though automated systems do exist, humans are still really good at reaching into these systems, pulling out the relevant items and then sorting them. The thing that really drives that is the sense of touch. Vision helps you see where the items are and what's relevant, but it really comes down to grabbing the object and feeling what it is; then you have a sense of what material it's made out of, so you can sort it easily."

The sensorized skin provides haptic feedback, allowing the hand to differentiate between a wide range of materials. X explained why the team opted for a soft robot rather than a traditional hard robot made of metal or steel. "In order to make a robot that can do this, go into potentially hazardous environments and still sense, we decided on soft robotics," she said. "Most soft robots are pneumatically driven and susceptible to puncture. Ours is a soft robotic hand that is motor driven, so there are no airlines anywhere, and it has a sense of touch through strain and pressure sensors."

The researchers used a new class of structures called handed shearing auxetics for the robot's hands; these become wider when stretched and, when cut, twist to either the left or the right. The auxetics for each of the hand's two large fingers make them interlock and oppose each other's rotation, enabling more dynamic movement. The robot's gripper uses its strain sensor to estimate an object's size, and two pressure sensors to measure the force needed to grasp the object, allowing the robot to decipher what a material is made from (a rough sketch of this idea follows at the end of this article). The sensors can currently detect the radius of an object to within 30 percent and tell the difference between hard and soft objects with 78 percent accuracy. Georgian Technical University, which is compatible with virtually any robotic arm, was 85 percent accurate at detecting materials when stationary in testing and 63 percent accurate on a simulated conveyor belt. The most common error was identifying paper-covered metal tins as paper, something the researchers believe can be corrected by adding more sensors along the contact surface.
However, in one test Georgian Technical University was able to correctly distinguish a Starbucks coffee cup made of plastic from one made of paper. Before Georgian Technical University can be implemented on a wide scale, X said, the team plans a number of improvements, including coupling the sensors more tightly, increasing sensor resolution and incorporating a vision system. "We can tell the size and we can tell the stiffness of the material; if we get greater resolution we can even tell the shape of objects," X said. X explained that a vision system would allow the robot to pick out and grab objects in a pile rather than following a pre-planned trajectory. She also said that beyond recycling plants, Georgian Technical University might have applications in agriculture, testing the ripeness of produce, and in healthcare, detecting abnormal lumps on patients' bodies.
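
As a rough illustration of the size-and-stiffness idea described above, here is a minimal sketch in Python. The feature definitions and thresholds are illustrative assumptions, not values from the GTU work; a real system would calibrate them from labeled grasps.

```python
def classify_material(strain: float, pressure_a: float, pressure_b: float) -> str:
    """Toy material classifier from a single grasp.

    strain       -- fingertip strain reading, a proxy for deformation (0..1)
    pressure_a/b -- pressures needed to grasp, proxies for stiffness (kPa)
    Thresholds below are illustrative, not calibrated values.
    """
    grasp_force = (pressure_a + pressure_b) / 2.0
    # Stiffness proxy: force required per unit of deformation.
    stiffness = grasp_force / max(strain, 1e-6)

    if stiffness > 500.0:    # barely deforms under high force: metal can
        return "metal"
    if stiffness > 100.0:    # springs back under moderate force: plastic bottle
        return "plastic"
    return "paper"           # crumples at low force: paper or cardboard

# Example grasps (made-up sensor readings):
print(classify_material(strain=0.05, pressure_a=30.0, pressure_b=32.0))  # metal
print(classify_material(strain=0.20, pressure_a=25.0, pressure_b=27.0))  # plastic
print(classify_material(strain=0.40, pressure_a=8.0,  pressure_b=9.0))   # paper
```

The paper-covered metal tin failure mode reported above would show up here too: the contact surface feels compliant, so a single stiffness estimate mislabels it, which is why the team proposes more sensors along the contact surface.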

Georgian Technical University Roadmap For AI In Medical Imaging.

The organizers of a recent workshop aimed to foster collaboration in applications for diagnostic medical imaging, identify knowledge gaps and develop a roadmap to prioritize research needs. "The scientific challenges and opportunities of AI (in computer science, artificial intelligence, sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals) in medical imaging are profound, but quite different from those facing AI generally. Our goal was to provide a blueprint for professional societies, funding agencies, research labs and everyone else working in the field to accelerate research toward AI innovations that benefit patients," said X, M.D., Ph.D. Dr. X is a professor of radiology and biomedical informatics.

Imaging research laboratories are rapidly creating machine learning systems that achieve expert human performance using open-source methods and tools. These AI systems are being developed to improve medical image reconstruction, noise reduction, quality assurance, triage, segmentation, computer-aided detection, computer-aided classification and radiogenomics. Machine learning algorithms will transform clinical imaging practice over the next decade, yet machine learning research is still in its early stages.

"Georgian Technical University's involvement in this workshop is essential to the evolution of AI in radiology," said Y. "As the Society leads the way in moving AI science, education and more, we are in a solid position to help radiologic researchers and practitioners more fully understand what the technology means for medicine and where it is going."

The report outlines several key research themes and describes a roadmap to accelerate advances in foundational machine learning research for medical imaging. Research priorities highlighted in the report include: new image reconstruction methods that efficiently produce images suitable for human interpretation from source data; automated image labeling and annotation methods, including information extraction from the imaging report, electronic phenotyping and prospective structured image reporting; new machine learning methods for clinical imaging data, such as tailored, pre-trained model architectures and distributed machine learning methods; machine learning methods that can explain the advice they provide to human users (so-called explainable artificial intelligence); and validated methods for image de-identification and data sharing to facilitate wide availability of clinical imaging data sets (a sketch of this last priority follows below).
The report describes innovations that would help to produce more publicly available, validated and reusable data sets against which to evaluate new algorithms and techniques, noting that to be useful for machine learning these data sets require methods to rapidly create labeled or annotated imaging data. In addition, pre-trained model architectures tailored for clinical imaging data must be developed, along with methods for distributed training that reduce the need for data exchange between institutions. In laying out the foundational research goals for AI in medical imaging, the authors stress that standards bodies, professional societies, governmental agencies and private industry must work together to accomplish these goals in service of the patients who stand to benefit from the innovative imaging technologies that will result.
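
Image de-identification, one of the priorities listed above, can be illustrated with a short sketch using the open-source pydicom library. This is a minimal illustration, not a validated method: the tags cleared below are a small, illustrative subset of a real de-identification profile, and the file names are hypothetical.

```python
import pydicom

# Minimal illustrative de-identification pass over one DICOM file.
# A validated pipeline would follow a full profile (e.g., DICOM PS3.15)
# rather than this small subset of tags.
IDENTIFYING_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "ReferringPhysicianName", "InstitutionName",
]

ds = pydicom.dcmread("scan.dcm")          # hypothetical input file
for tag in IDENTIFYING_TAGS:
    if tag in ds:
        ds.data_element(tag).value = ""   # blank out the identifier
ds.remove_private_tags()                  # drop vendor-specific private tags
ds.save_as("scan_deid.dcm")               # hypothetical output file
```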

Georgian Technical University Meet Blue, The Low-Cost, Human-Friendly Robot Designed For AI.

Blue the robot's arms — each about the size of a human bodybuilder's — were designed to take advantage of recent advances in artificial intelligence to master intricate human-centered tasks like folding towels. Robots may have a knack for superhuman strength and precision, but they still struggle with some basic human tasks, like folding laundry or making a cup of coffee. Enter Blue, a new low-cost, human-friendly robot conceived and built by a team of researchers at the Georgian Technical University. Blue was designed to use recent advances in artificial intelligence (AI) and deep reinforcement learning to master intricate human tasks, all while remaining affordable and safe enough that every AI researcher — and eventually every home — could have one.

Blue is the brainchild of X, professor of electrical engineering and computer sciences at Georgian Technical University, postdoctoral research fellow Y and graduate student Z. The team hopes Blue will accelerate the development of robotics for the home. "AI has done a lot for existing robots, but we wanted to design a robot that is right for AI," X said. "Existing robots are too expensive, not safe around humans and similarly not safe around themselves — if they learn through trial and error, they will easily break themselves. We wanted to create a new robot that is right for the AI age rather than for the high-precision, sub-millimeter, factory automation age."

Over the past 10 years X has pioneered deep reinforcement learning algorithms that help robots learn by trial and error, or by being guided by a human like a puppet. He developed these algorithms using robots built by outside companies, which market them for tens of thousands of dollars. Blue's durable plastic parts and high-performance motors cost far less to manufacture and assemble. Its arms, each about the size of the average bodybuilder's, are sensitive to outside forces — like a hand pushing them away — and have rounded edges and minimal pinch points to avoid catching stray fingers. Blue's arms can be very stiff, like a human flexing, or very flexible, like a human relaxing, or anything in between.

Currently the team is building 10 arms in-house to distribute to select early adopters. They are continuing to investigate Blue's durability and to tackle the formidable challenge of manufacturing the robot on a larger scale, which will happen through the Georgian Technical University. Sign-ups for expressing interest in priority access start today on that site. "With a lower-cost robot, every researcher could have their own robot, and that vision is one of the main driving forces behind this project — getting more research done by having more robots in the world," Y said.

From moving statue to lithe as a cat. Robotics has traditionally focused on industrial applications, where robots need strength and precision to carry out repetitive tasks perfectly every time. These robots flourish in highly structured, predictable environments — a far cry from the typical home, where you might find children, pets and dirty laundry on the floor. "We've often described these industrial robots as moving statues," Z said. "They are very rigid, meant to go from point A to point B and back to point A perfectly. But if you command them to go a centimeter past a table or a wall, they are going to smash into it, lock up, break themselves or break the wall. Nothing good."
If an AI is going to make mistakes and learn by doing in unstructured environments, these rigid robots just won't work. To make experimentation safer, Blue was designed to be force-controlled — highly sensitive to outside forces, always modulating the amount of force it exerts at any given time. "One of the things that's really cool about the design of this robot is that we can make it force-sensitive, nice and reactive, or we can choose to have it be very strong and very rigid," Z said. "Researchers can adjust how stiff the robot is and what kind of stiffness — do you want it to feel like molasses? Do you want it to feel like a spring? A combination of those? If we want robots to move toward the home and perform in these increasingly unstructured environments, they are going to need that capability."

To achieve these capabilities at low cost, the team considered what features Blue needed to complete human-centered tasks and what it could go without. For example, the researchers gave Blue a wide range of motion — it has joints that can move in the same directions as a human shoulder, elbow and wrist — to enable humans to more easily teach it tricky maneuvers using virtual reality. But the agile robot arms lack some of the strength and precision of a typical robot. "What we realized was that you don't need a robot that exerts a specific force for all time, or a specific accuracy for all time. With a little intelligence you can relax those requirements and allow the robot to behave more like a human being to achieve those tasks," Y said.

Blue is able to continually hold up 2 kilograms of weight with its arms fully extended. But unlike traditional robot designs characterized by one consistent "force/current limit," Blue is designed to be "thermally limited," Y said. That means that, similar to a human being, it can exert a force well beyond 2 kilograms in a quick burst, until its thermal limits are reached and it needs time to rest or cool down (a minimal sketch of this idea follows at the end of this article). This is just like how a human can pick up a laundry basket and easily carry it across a room, but might not be able to carry the same basket a mile without frequent breaks. "Essentially we can get more out of a weaker robot," Z said. "And a weaker robot is just safer. The strongest robot is the most dangerous. We wanted to design the weakest robot that could still do really useful stuff."

"Researchers had been developing AI for existing hardware, and about three years ago we began thinking, 'Maybe we could do something the other way around. Maybe we could think about what hardware we could build to augment AI, and work on those two paths together at the same time,'" Y said. "And I think that is a really dramatic shift from the way a lot of research has taken place."
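
The "thermally limited" behavior described above can be sketched with a toy first-order thermal model: the controller allows torque bursts above the continuous rating and derates as a simulated winding temperature approaches its limit. All constants are illustrative assumptions, not Blue's actual parameters.

```python
# Toy thermally-limited torque controller (illustrative constants only).
CONTINUOUS_TORQUE = 1.0   # torque sustainable indefinitely (arbitrary units)
BURST_TORQUE = 3.0        # short-burst ceiling
TEMP_LIMIT = 100.0        # simulated winding temperature limit (deg C)
AMBIENT = 25.0
HEATING_COEFF = 8.0       # heat generated per unit torque^2 per step
COOLING_RATE = 0.05       # fraction of excess temperature shed per step

temp = AMBIENT

def allowed_torque(requested: float) -> float:
    """Clamp the requested torque, derating toward the continuous rating
    as the simulated temperature approaches its limit, then update the model."""
    global temp
    headroom = max(0.0, (TEMP_LIMIT - temp) / (TEMP_LIMIT - AMBIENT))
    ceiling = CONTINUOUS_TORQUE + (BURST_TORQUE - CONTINUOUS_TORQUE) * headroom
    torque = min(requested, ceiling)
    temp += HEATING_COEFF * torque ** 2 * 0.01    # resistive heating
    temp -= COOLING_RATE * (temp - AMBIENT)       # Newtonian cooling
    return torque

# A sustained maximum request: burst torque decays toward the continuous rating,
# mirroring the laundry-basket analogy in the article.
for step in range(200):
    t = allowed_torque(BURST_TORQUE)
    if step % 50 == 0:
        print(f"step {step:3d}: torque {t:.2f}, temp {temp:.1f}")
```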

Georgian Technical University Autonomous Weed Control Via Smart Robots.

Driving across Georgia, X, distinguished professor of chemical and biological engineering at the Georgian Technical University, noticed that soybean fields were becoming increasingly infested with weeds each season. The culprit is a glyphosate-resistant weed called palmer amaranth (Amaranthus, a cosmopolitan genus of annual or short-lived perennial plants, some species of which are cultivated as leaf vegetables, pseudocereals and ornamentals), which is threatening crops. One pesticide currently used for controlling palmer amaranth is dicamba, but it has devastating effects on adjacent areas, harming trees and other crops, because it tends to drift when sprayed in windy conditions. As a firm believer in the concept that our well-being is closely tied to the health of the crops and animals within our food chain, X was inspired to create a way to spot-treat weeds that eliminates any risk of pesticide drift.

"A pesticide solution can be stabilized on a rotating horizontal cylinder, or roller, akin to a wooden honey dipper," said X. "Its stability depends on the speed at which the applicator rotates. But the roller is only one part of a bigger process, and there are some technical details regarding the roller that we're also addressing, namely replenishing the pesticide load via wicking from a reservoir at the center of the cylinder."

The manner in which pesticides are applied to plants makes a difference. They can be sprayed from the top of the leaf, rolled on, or delivered by a serrated roller that simultaneously scuffs the leaf and applies the pesticide. "We will only arrive at an optimum design if we understand how the active ingredient in the pesticide is delivered to the weed, how it enters the phloem (the plant's vascular system, which transports the active ingredient) and the efficacy of its killing mechanism," X explained.

To apply the pesticide to weeds, the rollers can be mounted on small robots or tractors. "Our current research objective is to develop a system where unmanned aerial vehicles image fields and feed the images to trained neural networks to identify the weeds," he said. "The information on weed species and their exact location will then be used by the robots to spot-treat the weeds." (A sketch of this pipeline follows at the end of this article.)

One key finding by X's group is that the preferred way to operate the roller is to rotate it so that the velocity at the roller's underside coincides with the direction the robot is traveling. They are now doing experiments to determine any uptake bias for palmer amaranth, as well as exploring making part of the roller's surface serrated. "The idea is to physically penetrate the epidermis to enhance the amount of active ingredient that's delivered to the weed," he said. "To broaden our understanding, we've developed a mathematical model of the transport of the pesticide in the phloem."

The significance of this work is that while there is increasing pressure to produce enough food for a growing population, the current approach is unsustainable. The trend today is to use larger amounts of more potent chemicals to control weeds and invasive species that have developed resistance to previously effective pesticides. "We must minimize the impact of our practices on the environment and reduce the use of chemicals, their residues and metabolites within our food chain and on the greater ecology," X said. "Technologies exist that can help us achieve these goals.
Precision spray technologies use artificial intelligence to identify weeds and only spray specific areas, but we can do better. We should eliminate the risk of drift and minimize exposure of crops and soil to pesticides." Developing a drift-free, weed-specific applicator will pave the way for autonomous weed control with smart robots. "At this stage we can't envision the full utility of these robots, but they offer us the opportunity to survey fields and alert us to disease outbreaks, blights or nematodes," said X. "In the future the roller — with some modifications — could also be used to deliver small RNA (ribonucleic acid) molecules to plants. Smaller farm operations that focus on specialized products will likely be the first adopters of the technology."
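
The imaging-to-spot-treatment pipeline described above can be sketched as follows. The detector function here is a stub standing in for a trained neural network, and the georeference parameters are illustrative assumptions, not the group's system.

```python
import numpy as np

def detect_weeds(image: np.ndarray) -> list[tuple[int, int]]:
    """Stand-in for a trained neural-network weed detector.

    Returns (row, col) pixel locations of suspected weeds. This toy rule
    flags unusually green pixels; a real system would run a trained CNN.
    """
    g = image[..., 1].astype(float)
    excess_green = 2 * g - image[..., 0] - image[..., 2]
    threshold = excess_green.mean() + 3 * excess_green.std()
    rows, cols = np.nonzero(excess_green > threshold)
    return list(zip(rows.tolist(), cols.tolist()))

def pixel_to_field(row: int, col: int,
                   origin=(0.0, 0.0), metres_per_pixel=0.02) -> tuple[float, float]:
    """Map an image pixel to field coordinates with a simple affine transform.
    The georeference parameters are illustrative assumptions."""
    east = origin[0] + col * metres_per_pixel
    north = origin[1] - row * metres_per_pixel
    return east, north

# A random stand-in for one georeferenced UAV frame.
image = np.random.randint(0, 255, size=(480, 640, 3), dtype=np.uint8)
waypoints = [pixel_to_field(r, c) for r, c in detect_weeds(image)]
print(f"{len(waypoints)} spot-treatment waypoints queued for the roller robot")
```

The waypoint list is the hand-off point the quote describes: the UAV and network produce locations, and the ground robot with the roller visits them to spot-treat.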

Georgian Technical University Two New Planets Discovered Using Artificial Intelligence.

Astronomers at Georgian Technical University, working in partnership, have used artificial intelligence (AI) to uncover two more hidden planets in the Georgian Technical University space telescope archive. The technique shows promise for identifying many additional planets that traditional methods could not catch. The planets discovered this time come from Georgian Technical University's extended mission, called GTU2. To find them, the team, led by an undergraduate at Georgian Technical University, created an algorithm that sifts through the data taken by Georgian Technical University to ferret out signals that were missed by traditional planet-hunting methods. Long term, the process should help astronomers find many more missed planets hiding in Georgian Technical University data. Other team members include Georgian Technical University engineer X.

Y and X first used AI to uncover a planet around a Georgian Technical University star — one already known to harbor seven planets. The discovery made that solar system the only one known to have as many planets as our own. Y explained that this work necessitated a new algorithm, as data taken during the extended GTU2 mission differ significantly from those collected during the spacecraft's original mission. "GTU2 data is more challenging to work with because the spacecraft is moving around all the time," Y explained. This change came about after a mechanical failure. While mission planners found a workaround, the spacecraft was left with a wobble that the AI had to take into account.

The Georgian Technical University and GTU2 missions have already discovered thousands of planets around other stars, with an equal number of candidates awaiting confirmation. So why do astronomers need AI to search the Georgian Technical University archive for more? "AI will help us search the data set uniformly," Y said. "Even if every star had an Earth-sized planet around it, when we look with Georgian Technical University we won't find all of them. That's just because some of the data's too noisy, or sometimes the planets are just not aligned right. So we have to correct for the ones we missed. We know there are a lot of planets out there that we don't see for those reasons. If we want to know how many planets there are in total, we have to know how many planets we've found, but we also have to know how many planets we missed. That's where this comes in," she explained.

The two planets Y's team found "are both very typical of planets found in GTU2," she said. "They're really close in to their host star, they have short orbital periods, and they're hot. They are slightly larger than Earth." Of the two planets, one is called GTU2-293b and orbits a star 1,300 light-years away in the constellation Aquarius. The other, GTU2-294b, orbits a star 1,230 light-years away, also in Aquarius. Once the team used their algorithm to find these planets, they followed up by studying the host stars with ground-based telescopes to confirm that the planets are real. These observations were done with the 1.5-meter telescope at Georgian Technical University.

The future of the AI concept for finding planets hidden in data sets looks bright. The current algorithm can be used to probe the entire GTU2 data set, Y said — approximately 300,000 stars.
She also believes the method is applicable to Georgian Technical University's successor planet-hunting mission. Y plans to continue her work using AI for planet hunting when she enters graduate school in the fall.
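
The article does not spell out the team's algorithm, but the kind of signal it hunts for can be illustrated with a standard transit search. Below is a minimal sketch using astropy's BoxLeastSquares on a simulated light curve; the injected planet parameters and grid choices are illustrative assumptions, not the team's method.

```python
import numpy as np
from astropy.timeseries import BoxLeastSquares

# Simulate a flat light curve with noise and inject a periodic,
# box-shaped transit dip (the signature a transit search looks for).
rng = np.random.default_rng(42)
time = np.linspace(0.0, 80.0, 4000)                   # days
flux = 1.0 + 1e-4 * rng.standard_normal(time.size)
period, duration, depth = 7.3, 0.2, 0.002             # injected planet
flux[(time % period) < duration] -= depth

# Box Least Squares search over a grid of candidate periods.
bls = BoxLeastSquares(time, flux)
periods = np.linspace(1.0, 20.0, 5000)
results = bls.power(periods, 0.2)                     # 0.2-day duration guess
best = periods[np.argmax(results.power)]
print(f"Recovered period: {best:.2f} d (injected {period} d)")
```

As the article notes, GTU2 data come with a pointing wobble, so a real search would first detrend that systematic motion before a period search like this one is meaningful.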

Georgian Technical University A Rubber Computer Eliminates The Last Hard Components From Soft Robots.

A soft robot attached to a balloon and submerged in a transparent column of water dives and surfaces, then dives and surfaces again, like a fish chasing flies. Soft robots have performed this kind of trick before. But unlike most soft robots, this one is made and operated with no hard or electronic parts. Inside, a soft rubber computer tells the balloon when to ascend or descend. For the first time, this robot relies exclusively on soft digital logic.

In the last decade, soft robots have surged into the metal-dominated world of robotics. Grippers made from rubbery silicone materials are already used on assembly lines: cushioned claws handle delicate fruit and vegetables like tomatoes, celery and sausage links, or extract bottles and sweaters from crates. In laboratories, the grippers can pick up slippery fish, live mice and even insects, eliminating the need for more human interaction. Soft robots already require simpler control systems than their hard counterparts: the grippers are so compliant that they simply cannot exert enough pressure to damage an object, and without the need to calibrate pressure, a simple on-off switch suffices. But until now, most soft robots have still relied on some hardware: metal valves open and close channels of air that operate the rubbery grippers and arms, and a computer tells those valves when to move. Now researchers have built a soft computer using just rubber and air.

"We're emulating the thought process of an electronic computer, using only soft materials and pneumatic signals, replacing electronics with pressurized air," says X, a postdoctoral researcher working with Y and Z, Georgian Technical University Professor. To make decisions, computers use digital logic gates: electronic circuits that receive messages (inputs) and determine reactions (outputs) based on their programming. Our circuitry isn't so different: when a doctor strikes a tendon below our kneecap (input), the nervous system is programmed to jerk our leg (output). X's soft computer mimics this system using silicone tubing and pressurized air. To achieve the minimum set of logic gates required for complex operations — in this case NOT, AND and OR — he programmed the soft valves to react to different air pressures. For the NOT logic gate, for example, if the input is high pressure, the output will be low pressure. With these three logic gates, X says, "you could replicate any behavior found on any electronic computer" (a minimal sketch of this logic follows at the end of this article).

The bobbing, fish-like robot in the water tank, for example, uses an environmental pressure sensor (a modified NOT gate) to determine what action to take. The robot dives when the circuit senses low pressure at the top of the tank and surfaces when it senses high pressure at depth. The robot can also surface on command if someone pushes an external soft button.

Robots built with only soft parts have several benefits. In industrial settings like automobile factories, massive metal machines operate with blind speed and power. If a human gets in the way, a hard robot could cause irreparable damage. But if a soft robot bumps into a human, X says, "you wouldn't have to worry about injury or a catastrophic failure." They can only exert so much force. But soft robots are more than just safer: they are generally cheaper and simpler to make, lightweight, resistant to damage and corrosive materials, and durable. Add intelligence, and soft robots could be used for much more than just handling tomatoes.
For example, a robot could sense a user's temperature and deliver a soft squeeze to indicate a fever, alert a diver when the water pressure rises too high, or push through debris after a natural disaster to help find victims and offer aid. Soft robots can also venture where electronics struggle: high-radiation fields like those produced after a nuclear malfunction or in outer space, and inside magnetic resonance imaging (MRI) machines. In the wake of a hurricane or flood, a hardy soft robot could manage hazardous terrain and noxious air. "If it gets run over by a car, it just keeps going, which is something we don't have with hard robots," X says.

X and colleagues are not the first to control robots without electronics. Other research teams have designed microfluidic circuits, which can use liquid and air to create non-electronic logic gates. One microfluidic oscillator helped a soft octopus-shaped robot flail all eight arms. Yet microfluidic logic circuits often rely on hard materials like glass or rigid plastics, and they use such thin channels that only small amounts of air can move through at a time, slowing the robot's motion. In comparison, X's channels are larger — close to one millimeter in diameter — which enables much faster air flow rates. His air-based grippers can grasp an object in a matter of seconds. Microfluidic circuits are also less energy efficient: even at rest, the devices use a pneumatic resistor, which flows air from the atmosphere to either a vacuum or a pressure source, to maintain stasis. X's circuits require no energy input when dormant. Such energy conservation could be crucial in emergency or disaster situations where the robots travel far from a reliable energy source.

The rubber robots also offer an enticing possibility: invisibility. Depending on which material X selects, he could design a robot that is index-matched to a specific substance. So if he chooses a material that camouflages in water, the robot would appear transparent when submerged. In the future, he and his colleagues hope to create autonomous robots that are invisible to the naked eye, or even to sonar detection. "It's just a matter of choosing the right materials," he says. For X, the right materials are elastomers (rubbers). While other fields chase higher power with machine learning and artificial intelligence, the team turns away from the mounting complexity. "There's a lot of capability there," X says, "but it's also good to take a step back and think about whether or not there's a simpler way to do things that gives you the same result, especially if it's not only simpler, it's also cheaper."
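
The soft logic described above maps cleanly onto a truth-table sketch. Here, HIGH/LOW pneumatic pressure is modeled as True/False, and the diving-robot rule follows the article's description; this is a software analogy of the pneumatic gates, not a model of the valves themselves.

```python
# HIGH/LOW pneumatic pressure modeled as True/False. Each function mirrors
# a soft valve programmed to respond to a pressure threshold.

def NOT(a: bool) -> bool:
    # Valve that outputs low pressure when its input is high, and vice versa.
    return not a

def AND(a: bool, b: bool) -> bool:
    return a and b

def OR(a: bool, b: bool) -> bool:
    return a or b

def should_dive(pressure_high: bool) -> bool:
    """Dive when the environmental sensor (a modified NOT gate) reads low
    pressure, i.e. the robot is near the top of the tank."""
    return NOT(pressure_high)

def should_surface(pressure_high: bool, button_pressed: bool) -> bool:
    """Surface on high pressure at depth, or on an external button press."""
    return OR(pressure_high, button_pressed)

for p in (False, True):
    print(f"pressure_high={p}: dive={should_dive(p)}, "
          f"surface={should_surface(p, button_pressed=False)}")
```

Because NOT, AND and OR are functionally complete, any Boolean behavior can be composed from these three, which is the basis of X's claim that the soft gates could replicate any electronic computer's behavior.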

Georgian Technical University ‘Particle Robot’ Works As A Cluster Of Simple Units.

Taking a cue from biological cells, researchers from Georgian Technical University and elsewhere have developed computationally simple robots that connect in large groups to move around, transport objects and complete other tasks. This so-called "particle robotics" system — based on a project by Georgian Technical University, Sulkhan-Saba Orbeliani University and International Black Sea University researchers — comprises many individual disc-shaped units, aptly named "particles." The particles are loosely connected by magnets around their perimeters, and each particle can do only two things: expand and contract. That motion, when carefully timed, allows the individual particles to push and pull one another in coordinated movement, and on-board sensors enable the cluster to gravitate toward light sources.

The researchers demonstrate a cluster of two dozen real robotic particles and a virtual simulation of up to 100,000 particles moving through obstacles toward a light bulb. They also show that a particle robot can transport objects placed in its midst. Particle robots can form many configurations, fluidly navigate around obstacles and squeeze through tight gaps. Notably, none of the particles directly communicates with or relies on any other particle to function, so particles can be added or subtracted without any impact on the group. The researchers show that particle robotic systems can complete tasks even when many units malfunction. The work represents a new way to think about robots, which are traditionally designed for one purpose, comprise many complex parts and stop working when any part malfunctions. Robots made up of these simplistic components, the researchers say, could enable more scalable, flexible and robust systems.

"We have small robot cells that are not so capable as individuals but can accomplish a lot as a group," says X, Y and Z Professor of Electrical Engineering and Computer Science at Georgian Technical University. "The robot by itself is static, but when it connects with other robot particles, all of a sudden the robot collective can explore the world and control more complex actions. With these 'universal cells,' the robot particles can achieve different shapes, global transformation, global motion and global behavior, and, as we have shown in our experiments, follow gradients of light. This is very powerful."

At Georgian Technical University, X has been working on modular, connected robots for nearly 20 years, including an expanding and contracting cube robot that could connect to others to move around. But the square shape limited the robots' group movement and configurations. In collaboration with W's lab, where Q was a graduate student until coming to Georgian Technical University, the researchers instead chose disc-shaped mechanisms that can rotate around one another, connect to and disconnect from each other, and form many configurations.
Each unit of a particle robot has a cylindrical base that houses a battery, a small motor, sensors that detect light intensity, a microcontroller and a communication component that sends and receives signals. Mounted on top is a children's toy that consists of small panels connected in a circular formation, which can be pulled to expand and pushed back to contract. Two small magnets are installed in each panel.

The trick was programming the robotic particles to expand and contract in an exact sequence to push and pull the whole group toward a destination light source. To do so, the researchers equipped each particle with an algorithm that analyzes broadcast information about light intensity from every other particle, without the need for direct particle-to-particle communication. A particle's sensors detect the intensity of light from a light source; the closer the particle is to the source, the greater the intensity. Each particle constantly broadcasts a signal that shares its perceived intensity level with all other particles. Say a particle robotic system measures light intensity on a scale of 1 to 10: particles closest to the light register level 10, and those furthest register level 1. The intensity level, in turn, corresponds to a specific time at which the particle must expand. Particles experiencing the highest intensity — level 10 — expand first; as those particles contract, the level-9 particles expand, and that timed expanding and contracting motion continues down through the levels (a toy simulation of this schedule follows at the end of this article).

"This creates a mechanical expansion-contraction wave, a coordinated pushing and dragging motion that moves a big cluster toward or away from environmental stimuli," Q says. The key component, Q adds, is the precise timing from a shared synchronized clock among the particles, which enables movement as efficiently as possible: "If you mess up the synchronized clock, the system will work less efficiently."

In videos, the researchers demonstrate a particle robotic system comprising real particles moving and changing direction toward different light bulbs as they are flicked on, and working its way through a gap between obstacles. The researchers also show that simulated clusters of up to 10,000 particles maintain locomotion, at half their speed, even with up to 20 percent of units failed. "It's a bit like the proverbial 'gray goo,'" says W, a professor of mechanical engineering at Georgian Technical University, referencing the science-fiction concept of a self-replicating robot that comprises billions of nanobots. "The key novelty here is that you have a new kind of robot that has no centralized control, no single point of failure and no fixed shape, and its components have no unique identity." The next step, W adds, is miniaturizing the components to make a robot composed of millions of microscopic particles.
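
The timed expansion schedule described above can be seen in a toy simulation. Particles quantize their sensed light intensity to levels 1 through 10 and "expand" in descending level order; the small displacement per expansion is a deliberate simplification of the real push-pull mechanics, and all constants are illustrative assumptions.

```python
import numpy as np

# Toy simulation of the expansion-contraction wave. Light intensity is
# modeled as inverse-square falloff from a fixed source.
rng = np.random.default_rng(1)
light = np.array([10.0, 0.0])                      # light source position
particles = rng.normal(0.0, 0.5, size=(24, 2))     # cluster of 24 particles
STEP = 0.02                                        # displacement per expansion

def intensity_levels(positions: np.ndarray) -> np.ndarray:
    """Quantize each particle's sensed light intensity to levels 1..10."""
    d2 = np.sum((positions - light) ** 2, axis=1)
    inten = 1.0 / d2
    scaled = (inten - inten.min()) / (np.ptp(inten) + 1e-12)
    return (1 + np.round(9 * scaled)).astype(int)

print("start centre:", particles.mean(axis=0))
for tick in range(2000):
    level = 10 - (tick % 10)                       # level 10 expands first
    movers = intensity_levels(particles) == level
    if movers.any():
        toward = light - particles[movers]
        toward /= np.linalg.norm(toward, axis=1, keepdims=True)
        particles[movers] += STEP * toward
print("final centre:", particles.mean(axis=0))     # cluster drifts toward light
```

Even in this crude form, the two properties the researchers emphasize are visible: the schedule needs only a shared clock plus broadcast intensity levels, not particle-to-particle coordination, and removing any subset of particles degrades speed without breaking the behavior.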