AI threats in software development revealed in new study from The University of Texas at San Antonio
UTSA researchers recently completed one of the most comprehensive studies to date on the risks of using AI models to develop software. In a new paper, they demonstrate how a specific type of error could pose a serious threat to programmers who use AI to help write code.
Joe Spracklen, a UTSA doctoral student in computer science, led the study on how large language models (LLMs) frequently generate insecure code. His team's paper has been accepted for publication at the USENIX Security Symposium 2025, a premier cybersecurity conference.