New AI models could slash energy use while dramatically improving performance
Neuro-symbolic AI combines neural network pattern recognition and generation with higher level symbolic reasoning
Power usage by AI and data center systems in the U.S. is extraordinary by any measure. The International Energy Agency estimates that U.S. AI and data centers used about 415 terawatt-hours of electricity in 2024, more than 10% of the nation's generation that year, and that consumption is expected to double by 2030.
Seeking to head off this unsustainable trajectory of power consumption, researchers at the School of Engineering have developed a proof of concept for efficient AI systems that could use one-hundredth the energy of current ones while delivering more accurate results.
The approach, developed in the laboratory of Matthias Scheutz, Karol Family Applied Technology Professor, uses neuro-symbolic AI: a combination of conventional neural network AI with symbolic reasoning, similar to the way humans break tasks and concepts down into steps and categories. The research will be presented at the International Conference on Robotics and Automation (ICRA) in Vienna in May and published in the conference proceedings.
Scheutz and his team focus on robots that interact with humans, so the AI technologies they employ are not screen-based large language models (LLMs) such as ChatGPT and Gemini. Instead, they study vision-language-action (VLA) models, which extend LLMs with visual and movement capabilities for robots. These models take camera and language inputs and respond by generating actions in the real world, like moving a robot's wheels, legs, arms, and fingers.
With conventional, resource-intensive VLA approaches, if a robot were asked to stack blocks into a simple tower, the system might scan the scene, identify the location, shape, and orientation of each block, and interpret the instruction to place one block on top of another. In attempting the task it might, for example, misjudge a block's shape because of shadows, misplace a block, or stack the blocks so that the tower tips over.
By analogy with LLMs, the robot's failed attempts are akin to a chatbot producing inaccurate or outright false text or images. Famous examples include inventing court cases for legal briefs and depicting people with six fingers.
Symbolic reasoning is more efficient than the conventional approach because it derives general planning strategies from puzzle rules and abstract categories such as block shape and center of mass.
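The idea of a rule pruning trial and error can be seen in miniature. The toy sketch below is not the researchers' code; it simply shows how one symbolic rule for block stacking (a block may rest only on a strictly larger block or the table) rules out invalid actions before any learned policy has to consider them:

```python
# Toy illustration (not the team's implementation): a symbolic rule that
# prunes invalid placements before any neural component is consulted.
# Rule assumed here: a block may only rest on a strictly larger block
# or directly on the table.

def legal_placements(block_sizes, block, candidates):
    """Return only the candidate targets that satisfy the stacking rule."""
    return [t for t in candidates
            if t == "table" or block_sizes[t] > block_sizes[block]]

sizes = {"small": 1, "medium": 2, "large": 3}

# The small block may go on either larger block or the table.
print(legal_placements(sizes, "small", ["medium", "large", "table"]))

# The large block may only go on the table, so two of three
# candidate actions are pruned without any trial and error.
print(legal_placements(sizes, "large", ["small", "medium", "table"]))
```

A learned policy searching over all placements would have to discover, through failures, that towers with a large block on a small one tip over; the rule removes those actions from consideration outright.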
How Neuro‑Symbolic Systems Work Better

“Like an LLM, VLA models act on statistical results from large training sets of similar scenarios, but that can lead to errors,” said Scheutz. “A neuro-symbolic VLA can apply rules that limit the amount of trial and error during learning and get to a solution much faster. Not only does it complete the task much faster, but the time spent on training the system is significantly reduced.”

In tests using a standard Tower of Hanoi puzzle, the neuro‑symbolic VLA system had a 95% success rate, compared with 34% for standard VLAs. For a more complex version of the puzzle that the robot had not seen in training, the neuro-symbolic system had a 78% success rate, while standard VLAs failed every attempt.
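The generalization result is easy to see with the puzzle itself. A purely symbolic Tower of Hanoi solver (a sketch, not the team's code) is correct by construction for any number of disks, which is why rule-based planning can succeed on puzzle sizes a learned policy never saw in training:

```python
# Illustrative sketch, not the paper's implementation: a symbolic
# planner for the Tower of Hanoi. The same recursion yields a correct,
# optimal move sequence for any number of disks, including instances
# never encountered during training.

def hanoi(n, src, dst, aux):
    """Return the optimal move list for n disks from peg src to peg dst."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, aux, dst)    # clear the n-1 smaller disks
            + [(src, dst)]                 # move the largest disk
            + hanoi(n - 1, aux, dst, src)) # restack the smaller disks

plan = hanoi(3, "A", "C", "B")
assert len(plan) == 2**3 - 1  # always the optimal 7 moves for 3 disks
```

Where a standard VLA must relearn the pattern statistically for each puzzle size, the symbolic plan scales to larger instances for free, at the cost of needing the rules stated up front.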
The neuro-symbolic system could be trained in just 34 minutes, while the standard VLA model took over a day and a half. Significantly, training the neuro-symbolic model used only 1% of the energy required to train the VLA model, and the savings continued at run time: executing tasks with the neuro-symbolic model took only 5% of the energy required to run the VLA.
Scheutz draws parallels to familiar LLMs like ChatGPT or Gemini. “These systems are just trying to predict the next word or action in a sequence, but that can be imperfect, and they can come up with inaccurate results or hallucinations. Their energy expense is often disproportionate to the task. For example, when you search on Google, the AI summary at the top of the page consumes up to 100 times more energy than the generation of the website listings.”
With the explosion in user demand for AI systems and their integration into industrial applications, there is a competitive arms race to build ever larger data centers, facilities whose power usage can reach hundreds of megawatts, far more than is needed to power a small city.
The researchers conclude that current LLMs and VLAs, despite their popularity, may not be the right foundation for energy‑efficient, reliable AI, and may take us right up against a wall of resource limitations. Instead, they suggest that hybrid neuro‑symbolic AI could provide a more sustainable and dependable path forward.