
Los Alamos National Laboratory (yes, the Los Alamos, where the atomic bomb was developed) has partnered with OpenAI to use A.I. to assist in bioscience research. What Los Alamos is doing here will likely be paralleled in businesses with R&D operations. Its other A.I. initiatives are also worth paying attention to. 

 

The Five Most-Key Takeaways from This Blog Post

  • OpenAI points to A.I.'s vision and voice capabilities as being useful in a research setting. This could help with things like lab setup and meeting safety guidelines. By showing a lab setup to an A.I. that has been trained on lab protocols, researchers could have the A.I. talk them through safety guidelines and any challenges the setup may pose. 
  • This partnership was announced in mid-2024. Los Alamos continues to be of interest for A.I. development, as the U.S. Department of Energy has identified this famed New Mexico research hub as a federal site for housing A.I. data centers. 
  • Though Los Alamos National Laboratory is a government research institution, its A.I. integrations hold lessons for anyone looking to understand how A.I. will affect the private sector. For instance, A.I. could deliver real-time performance feedback as researchers carry out tasks. 
  • Current limits, of course, stem from A.I. not being a physical presence in the lab: it can give instructions and recommendations for setting up the lab, but it cannot physically perform those tasks itself. That is, until A.I.-integrated robots are working in the lab alongside scientists. 
  • Overall, Los Alamos’ use of A.I. in a lab setting points to the future of R&D in both the public and private sectors, where A.I. can act as part manager, part compliance officer in overseeing researchers’ work. 

 

The Significance for Business Owners

The lab-safety use case is a relevant one for private-sector research labs, as it can help with compliance and, in turn, head off lawsuits. 

So, Los Alamos’ early work in using A.I. for compliance could have implications for the private sector by influencing the development of A.I. in research labs across the country.  

Something for business owners to be wary of is when A.I. guidance does not align with protocol, or when the A.I. hallucinates false information. In certain lab settings, a hallucinated instruction could actually create danger rather than prevent it. 

Another potential issue could be expecting researchers to always follow the direction of the A.I., and punishing researchers if they do not. An “A.I. is always right” approach to implementation could stifle innovation, as it could make researchers just follow A.I.’s directions for the sake of keeping higher-ups satisfied.

 

How Else Is A.I. Used in Research Settings?

The idea-generation phase is certainly getting a boost from generative A.I. 

As the reader of this blog post is likely well aware, there is general anxiety about A.I. claiming territory in creative activities. Much of this anxiety is well-founded, as A.I. will only get better at generating outputs that are in line with what the users prompt it to create. 

However, there are some use cases for A.I. where the technology will perform at a level that is just not humanly possible. Take pharmaceutical development as one example. Drug design is increasingly dependent on protein structures, specifically in creating molecules that bind with target proteins. 

You have probably heard of large language models (L.L.M.s), those A.I. systems that are trained on a ton of written content. But have you heard of chemical language models (C.L.M.s)? 

Same idea, only instead of Reddit posts and copyrighted works (whoops), the training data is made up of chemical structures. 

These C.L.M.s can generate a huge number of molecular structures as well as predict how those structures will interact with, say, target proteins in the human body. What might have taken humans years to think up, A.I. could do in short order, expediting the drug-discovery process. 
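To make the "language model over chemical structures" idea concrete, here is a deliberately tiny sketch: a character-level bigram model trained on a handful of SMILES strings (a standard text notation for molecules). Everything here is illustrative — the three-molecule "corpus" is a stand-in, and real C.L.M.s are deep neural networks trained on millions of structures, not simple transition tables.

```python
import random
from collections import defaultdict

# Hypothetical toy training set: SMILES strings for a few small molecules.
# Real C.L.M.s train on databases of millions of structures.
SMILES_CORPUS = [
    "CC(=O)OC1=CC=CC=C1C(=O)O",       # aspirin
    "CN1C=NC2=C1C(=O)N(C)C(=O)N2C",   # caffeine
    "CC(C)CC1=CC=C(C=C1)C(C)C(=O)O",  # ibuprofen
]

def train_bigram_model(corpus):
    """Count character-to-character transitions, using '^'/'$' as start/end markers."""
    transitions = defaultdict(list)
    for smiles in corpus:
        chars = ["^"] + list(smiles) + ["$"]
        for a, b in zip(chars, chars[1:]):
            transitions[a].append(b)
    return transitions

def sample_structure(transitions, max_len=60, seed=None):
    """Walk the transition table to emit a new SMILES-like character sequence."""
    rng = random.Random(seed)
    out, current = [], "^"
    for _ in range(max_len):
        current = rng.choice(transitions[current])
        if current == "$":
            break
        out.append(current)
    return "".join(out)

model = train_bigram_model(SMILES_CORPUS)
candidate = sample_structure(model, seed=42)
print(candidate)  # a SMILES-like string; not guaranteed to be chemically valid
```

Note the comment on the last line: even this toy model shows why generated structures must be validated downstream — the model learns surface patterns of the notation, not chemistry, which is also why some "hallucinated" outputs can turn out to be novel and worth testing.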

In these cases, the problem of A.I. hallucination is sometimes seen as a good thing. That is because a hallucination can end up generating, say, a molecular structure that scientists in their right mind may never have come up with themselves. Even if it is out of left field, the structure may end up being useful. 

 

The Last (But Not Least) Key Takeaway from This Blog Post

Overall, A.I. looks to be useful for helping businesses keep R&D processes compliant and efficient, although the looming issue of hallucination, along with A.I.'s common-sense limitations, makes it essential for researchers not to treat A.I. as the final word on the subject. 

 

Other Great GO AI Blog Posts

GO AI the blog offers a combination of information about, analysis of, and editorializing on A.I. technologies of interest to business owners, with a special focus on the impact this tech will have on commerce as a whole. 

On a usual week, there are multiple GO AI blog posts going out. Here are some notable recent articles: 

In addition to our GO AI blog, we also have a blog that offers important updates in the world of search engine optimization (SEO), with blog posts like “Google Ends Its Plan to End Third-Party Cookies.”