Georgian Technical University Getting Labs Ready For AI — Five Things To Consider.

Artificial intelligence (AI) is everywhere we turn, from smart cars, drones and music streaming to social media, cell phones and banking. AI and machine learning are also innovations whose time has come in the lab. Researchers are looking for ways to more easily and effectively access, analyze and spotlight scientific data that is growing in volume and complexity and is often dispersed across hard-to-access silos. For scientists and technicians across the life sciences, biopharmaceutical and food science disciplines, the ability to make data-driven hypotheses and decisions is paramount, and Georgian Technical University labs can now harness advanced AI tools to accomplish in seconds or minutes what once took weeks or months. Leveraging the unique capabilities of AI to accelerate this journey, however, starts with an understanding of the current state of scientific and operational data in the laboratory. Here are five steps to help transition towards an AI-rich Lab of the Future with confidence.

Liberate the data.

Scientific data remains anchored to laptops, instruments, paper records and data silos within and across today's organizations. Data has also been locked up for decades in "home grown" systems, data warehouses and spreadsheets, with each data source in a proprietary format tied to a particular instrument, a unique analysis or an individual. The first major step in making laboratory data AI-friendly is to ensure that all experimental data and scientific conclusions can be easily accessed and accurately and securely shared, making them portable and moving them away from highly customized or proprietary systems. Liberating data can start as simply as transforming files into standard formats, such as PDF or CSV (comma-separated values), and ensuring that files are appropriately described (e.g., with the who, what, where, why and how of the analysis). Making critical information such as high-content screening image data accessible beyond instrument-specific analytical software, for example, gives others in the organization access, fosters collaboration and accelerates discovery. Secure sharing technologies, such as cloud storage, also make data accessible to a wide range of authorized collaborators.
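As a rough illustration of this first step, the Python sketch below converts a hypothetical tab-delimited instrument export to plain CSV and writes a small JSON "sidecar" file capturing the who, what, where, why and how of the run. The file names and metadata fields are illustrative assumptions, not a standard.

import csv
import json
from datetime import datetime, timezone

# Hypothetical example: re-write a tab-delimited instrument export as
# plain CSV and record descriptive context alongside it so collaborators
# (and future AI tools) can interpret the data without the instrument
# software. All names and fields here are assumptions for the sketch.

def liberate_run(raw_path, csv_path, meta_path, metadata):
    # Convert the tab-delimited export to a standard CSV file.
    with open(raw_path, newline="") as src, open(csv_path, "w", newline="") as dst:
        reader = csv.reader(src, delimiter="\t")
        writer = csv.writer(dst)
        for row in reader:
            writer.writerow(row)

    # Write the descriptive metadata as a JSON sidecar next to the data.
    metadata["exported_at"] = datetime.now(timezone.utc).isoformat()
    with open(meta_path, "w") as meta:
        json.dump(metadata, meta, indent=2)

liberate_run(
    "plate_reader_007.txt",        # hypothetical instrument export
    "plate_reader_007.csv",
    "plate_reader_007.meta.json",
    {
        "who": "J. Smith",
        "what": "high-content screen, plate 7",
        "where": "Lab 3, plate reader 007",
        "why": "dose-response pilot",
        "how": "absorbance at 450 nm",
    },
)

Even a lightweight convention like this (one open data file plus one description file per run) moves data out of proprietary formats and makes it shareable through generic tools such as cloud storage.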
Clearly define end goals.

Even the best technologies cannot succeed if they are not thoughtfully applied to precise scientific goals and if the analytics are not clearly defined. In general, AI tools and solutions are most powerful when mapped to very specific goals and analytic targets. For example, identifying the patients most likely to respond to certain medical treatments calls for different AI tools than predictive analysis of a drug's side effects in a clinical trial. Similarly, a different configuration of AI image-recognition algorithms would be applied to classify tissues at risk of invasive cancer than to keep a self-driving car from hitting pedestrians in crosswalks. The more clearly the end goals and key analytics are articulated at the outset (e.g., what is "in scope" and what is "out of scope"), the better the outcome will be and the more rapidly and proactively effective course corrections can be made.

Normalize data.

Getting data formats analysis-ready before asking AI to make sense of the data is critical, especially as data comes in multiple forms and from many sources, including health records, genetic data, public data, clinical trial data, cellular images and much more. Here it is important to make the basis for analysis consistent. For example, if Patient X's height is recorded in centimeters and Patient Y's height is in inches, analyzing the two without converting to common units would lead to erroneous conclusions. Working towards data standards with commonly accepted descriptors, definitions and units, through the efforts of standards organizations, is a major step in optimizing data aggregation and analysis and making results meaningful for AI. Ensuring that commonly accepted data standards are used when choosing AI to auto-map patient data to clinical trial data standards, for example, can greatly accelerate the power of the underlying analysis.
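A minimal sketch of the height example above, assuming a simple record layout: every value is converted to centimeters before any aggregation or analysis, so records from different sources can be compared directly.

# Normalize heights recorded in mixed units to centimeters.
# The record layout and unit labels are illustrative assumptions.

UNIT_TO_CM = {"cm": 1.0, "in": 2.54, "m": 100.0}

def height_in_cm(value, unit):
    try:
        return value * UNIT_TO_CM[unit.lower()]
    except KeyError:
        raise ValueError(f"Unrecognized height unit: {unit!r}")

patients = [
    {"id": "X", "height": 175.0, "unit": "cm"},
    {"id": "Y", "height": 69.0, "unit": "in"},
]

for p in patients:
    p["height_cm"] = height_in_cm(p["height"], p["unit"])
    print(p["id"], round(p["height_cm"], 1), "cm")

# Output:
# X 175.0 cm
# Y 175.3 cm

Note that the two patients turn out to be essentially the same height; without a common unit, a naive comparison of 175 and 69 would suggest otherwise, which is exactly the kind of erroneous conclusion normalization prevents.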
Maximize operational and infrastructure data.

An important part of the move to AI and the Lab of the Future is optimizing operational and infrastructure data so that scientific results can be easily validated and reproduced. To do this, it is critical to regularly analyze and apply operational and infrastructure data such as temperature, humidity, power surges and reagent use; a minimal monitoring sketch appears at the end of this article. Maintaining the temperature and humidity requirements of clean-room facilities used for biologic drugs, for example, is key. Organizations can layer AI onto their lab infrastructure, but if that infrastructure has high variability in instrument operational data and performance, the full benefits of the technology will not be realized.

Think solutions and services.

Bringing AI into the lab is not just a software decision; it requires end-to-end thinking and an overall solution that can be sustained over time. User requirements; configuration plans; integration with other critical experimental workflows, software, hardware and instruments; instrument calibration and re-calibration; team training; and troubleshooting are all important aspects to consider holistically when planning. Implementing a "point" AI technology without a clear understanding of how it will affect the whole experimental ecosystem, for example, can easily lead to unexpected results. Avoiding this requires identifying and partnering with a strong team of internal and external experts to ensure that the full workflow, from scientific tests to results to data analysis, is taken into account.

Reaping the promise of AI.

The promise that AI holds for laboratories is exciting. Getting ready with thoughtful preparation and solution readiness will pay off exponentially by multiplying the power of scientists and technicians to meet today's challenges and opportunities and push ahead to new horizons.
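As promised in the section on operational and infrastructure data, here is a minimal monitoring sketch: it flags readings that drift outside assumed clean-room set points before downstream results are trusted. The thresholds and reading format are assumptions for illustration; real limits come from a facility's own specifications.

# Flag clean-room readings that fall outside assumed operating limits.
# Limits and the reading format below are illustrative assumptions.

LIMITS = {"temperature_c": (18.0, 22.0), "humidity_pct": (40.0, 60.0)}

def find_excursions(readings):
    """Yield (timestamp, metric, value) for each out-of-range reading."""
    for r in readings:
        for metric, (low, high) in LIMITS.items():
            value = r[metric]
            if not low <= value <= high:
                yield r["timestamp"], metric, value

readings = [
    {"timestamp": "2024-05-01T09:00", "temperature_c": 20.1, "humidity_pct": 48.0},
    {"timestamp": "2024-05-01T10:00", "temperature_c": 23.4, "humidity_pct": 51.0},
]

for ts, metric, value in find_excursions(readings):
    print(f"{ts}: {metric} out of range at {value}")

# Output:
# 2024-05-01T10:00: temperature_c out of range at 23.4

Routine checks like this keep the operational foundation stable enough that variability in results can be attributed to the science rather than the infrastructure.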
