(Press-News.org)
A team of researchers from the University of Science and Technology of China and the Zhongguancun Institute of Artificial Intelligence has developed SciGuard, an agent-based safeguard designed to control the misuse risks of AI in chemical science. By combining large language models with principles and guidelines, external knowledge databases, relevant laws and regulations, and scientific tools and models, SciGuard ensures that AI systems remain both powerful and safe, achieving state-of-the-art defense against malicious use without compromising scientific utility. This study not only highlights the dual-use potential of AI in high-stakes science, but also provides a scalable framework for keeping advanced technologies aligned with human values.
The Promise and Peril of AI in Science
In recent years, AI has led a new paradigm for scientific research, transforming how discoveries are made and how knowledge advances. Systems can now propose new synthetic routes for molecules, predict toxicity before drugs reach clinical trials, and even assist scientists in planning experiments. These capabilities are not just speeding up routine work but reshaping the foundations of scientific research itself.
Yet with this promise comes peril. Just as AI can suggest how to make life-saving medicines, it can also reveal ways to synthesize highly toxic compounds or identify new routes for banned chemical weapons. Large language models (LLMs) are advanced AI systems trained on massive collections of text, beyond generating human-like responses, they can also act as agents that plan steps, reason through problems, and call external tools to complete complex tasks. This agentic capability has accelerated progress in many areas of science, but it also raises new risks: because LLMs operate through natural language, potentially dangerous information may be only a well-crafted prompt away.
“AI has transformative potential for science, yet with that power comes serious risks when it is misused.” said the research team. “That’s why we build SciGuard that don’t just make AI smarter, but also make it safer.”
An Agent at the Gate: How SciGuard Works
Although modifying the underlying AI models can introduce safety constraints, such interventions may come at the cost of reduced performance or limited adaptability. Instead, the team develop SciGuard that operates as an intelligent safeguard for AI models. When a user submits a request, whether to analyze a molecule or to propose a synthesis, SciGuard steps in. It interprets intent, cross-checks with scientific guidelines, consults external databases of hazardous substances, and applies regulatory principles before allowing an answer to pass through.
In practice, this means that if someone asks an AI system a dangerous question, such as how to make a lethal nerve agent, SciGuard will refuse to answer. But if the query is legitimate, such as asking about the safe handling of a laboratory solvent, SciGuard can providing a detailed, scientifically sound answer based on its knowledge, curated knowledge bases, and specialized scientific tools and models.
Built as an LLM-driven agent, SciGuard orchestrates planning, reasoning, and tool-use actions like retrieving laws, consulting toxicology datasets, and testing hypotheses with scientific models, and then updates its plan from the results to ensure safe, useful answers.
Balancing Safety with Scientific Progress
One of SciGuard’s most important point is that it enhances safety without undermining scientific utility. To put this balance to the test, the team built a dedicated evaluation benchmark called SciMT (Scientific Multi-Task), which challenges AI systems across both safety-critical and everyday scientific scenarios. The benchmark spans red-team queries, scientific knowledge checks, legal and ethical questions, and even jailbreak attempts, providing a realistic way to measure whether an AI is both safe and useful.
In these evaluations, SciGuard consistently refused to provide dangerous outputs while still delivering accurate and helpful information for legitimate purposes. This balance matters. If restrictions are too strict, they could limit innovation and make AI less useful in real-world situations. On the other hand, if the rules are too weak, technology could be misused. By achieving this balance and validating it systematically with SciMT, SciGuard offers a model for integrating safeguards into scientific AI more broadly.
A Framework for the Future and a Shared Responsibility
The researchers emphasize that SciGuard is not just about chemistry. The same approach could extend to other high-stakes domains such as biology and materials science. To support this broader vision, they have made SciMT openly available to encourage collaboration across research, industry, and policy.
The unveiling of SciGuard comes at a time when more people and Governments around the world are worried about using AI responsibly. In science, misuse could pose tangible threats to public health and safety. By providing both a safeguard and a shared benchmark, the team aims to set an example of how AI risks can be mitigated proactively.
“Responsible AI isn’t only about technology, it’s about trust,” the team said. “As scientific AI becomes more powerful, aligning it with human values is essential.”
The research has been recently published in the online edition of AI for Science, an interdisciplinary and international journal that highlight the transformative applications of artificial intelligence in driving scientific innovation.
Reference: Jiyan He et al 2025 AI Sci. 1 015002
END
“These results highlight potential mechanisms by which loss of p53 function contributes to an immunosuppressive microenvironment in HGSC, and provide insight into the role of ovarian and peritoneal microenvironments in regulating HGSC cell-intrinsic inflammatory signaling.”
BUFFALO, NY – September 24, 2025 – A new research paper was published in Volume 16 of Oncotarget on September 22, 2025, titled “Loss of Trp53 results in a hypoactive T cell phenotype accompanied by reduced pro-inflammatory signaling in a syngeneic orthotopic ...
For over 100 years, teddy bears have been a hallmark of childhood nurseries, ubiquitously embedded in our early memories and rarely the object of deep scrutiny. However, according a recent article in BioScience by Dr. Nicolas Mouquet (CRNS) and colleagues, the humble teddy bear is much more than a mere plaything. Instead, the authors suggest that the beloved plushes play a pivotal role in our early conception of nature, potentially shaping the ways we interact with the natural world throughout our lives.
...
Steve Jobs berated his teams. Jack Welch laid off a quarter of his workforce. Still, both are seen as visionary leaders and continue to serve as role models for many. A recent study takes a closer look at the logic and conditions of such leadership approaches. It shows that such radical leadership styles only work under specific circumstances and in some cases do more harm than good.
The study Annealing as an Alternative Mechanism for Management was authored by Matthew S. Bothner, professor of strategy ...
The 12th Heidelberg Laureate Forum (HLF) has come to a close. This year’s HLF took place from September 14 to 19 and brought together 28 Laureates of the most prestigious prizes in mathematics and computer science as well as 200 of those disciplines’ brightest Young Researchers of the next generation.
The week featured a host of fascinating talks, panels and interactive formats where some of the timeliest issues relating to mathematics and computer science were discussed, with a particular attention paid to the effects of AI on a host of issues. The program included a talk by Richard S. Sutton (2024 ACM A.M. Turing Award), pondering “The Future of Artificial Intelligence,” ...
Scientists at Université de Montréal’s affiliated hospital research centre (CRCHUM) are testing out a mobile application to help young adults who have a first episode of psychosis to support safer cannabis consumption.
The nationwide clinical trial, a first in Canada, is led by Université de Montréal psychiatry and addictology professor Didier Jutras-Aswad, a researcher at CRCHUM.
Called CHAMPS (Cannabis Harm-reducing App to Manage Practices Safely), the pilot study is described in an article published in the August issue of Psychiatry Research.
The new study is backed by $800,000 in funding ...
New York, 24 September 2025 – The International Telecommunication Union (ITU) – the United Nations agency for digital technologies – Google, and musician, tech founder and philanthropist will.i.am have launched an initiative to bring artificial intelligence and robotics training to students across Africa.
Announced during the Digital@UNGA Anchor Event at the UN General Assembly, the programme combines hands-on AI and robotics training for young people in underserved communities, including in those countries where the ...
Researchers at University of California San Diego, part of the national HEALthy Brain and Child Development (HBCD) Study Consortium, have announced the first data release from this landmark study, providing scientists around the world with new ways to answer critical questions about human brain development in early childhood. This inaugural data release includes comprehensive biomedical and behavioral data from more than 1,400 pregnant women and their children, collected across three early developmental stages from birth through nine months of age.
"This ...
What causes us to sleep? The answer may lie not only in our brains, but in their complex interplay with the micro-organisms spawned in our intestines.
New research from Washington State University suggests a new paradigm in understanding sleep, demonstrating that a substance in the mesh-like walls of bacteria, known as peptidoglycan, is naturally present in the brains of mice and closely aligned with the sleep cycle.
Those findings serve to update a broader hypothesis that has been in development at WSU for years—proposing that sleep arises from communication between the body’s sleep regulatory systems and the multitude of microbes living inside us.
“This ...
About 12 million adults in the United States are affected by PTSD, impacting between 4% and 8% of the adult population – and up to 30% of military personnel and veterans. Strikingly, 63% of veterans with post-traumatic stress disorder also suffer from alcohol use disorder (AUD) and/or chronic pain. These conditions frequently overlap, with individuals who have AUD or chronic pain often also experiencing PTSD.
When these disorders co-occur, they tend to worsen one another, making effective treatment significantly more challenging. Currently, no approved ...
Researchers from University of California San Diego School of Medicine have found that testing for lipoprotein(a) — a genetic risk factor for heart disease — remains uncommon in the United States, despite modest increases over the past decade. The findings were published on Sept. 26, 2025 in the Journal of the American College of Cardiology: Advances.
Lipoprotein(a) — or Lp(a) — is a type of cholesterol particle in the blood. Elevated levels are strongly linked to a higher risk of heart attack, stroke and aortic valve disease. Roughly 20% of the U.S. population has elevated Lp(a), yet testing rates have historically been low.
In the ...