# Neurosymbolic AI and Solver-LLM Integration

**Last updated**: 2025-10-31

Stefan Szeider integrates classical symbolic solvers with Large Language Models through the MCP Solver project. The system enables LLMs to use established automated reasoning solvers for provably correct problem-solving.

## MCP Solver: Connecting Formal Methods to LLMs

The **MCP Solver** exposes SAT, SMT, ASP, and constraint programming (CP) capabilities to Large Language Models via the Model Context Protocol. LLMs invoke solvers directly during reasoning, combining the pattern recognition capabilities of neural networks with the correctness guarantees of symbolic solvers. Published at SAT 2025 (International Conference on Theory and Applications of Satisfiability Testing).

### Supported Solver Backends

- **PySAT** for Boolean satisfiability (SAT) and Maximum Satisfiability (MaxSAT)
- **Z3** for Satisfiability Modulo Theories (SMT)
- **clingo** for Answer Set Programming (ASP)
- **MiniZinc** for constraint programming (CP)

## Research Impact

Neurosymbolic architectures outperform pure neural approaches on tasks requiring:

- Combinatorial optimization with hard constraints
- Logical reasoning with provable correctness
- Complex scheduling and resource allocation
- Verification and validation problems

**Citation**: Szeider, S. (2025). "Bridging Language Models and Symbolic Solvers via the Model Context Protocol." *Proceedings of SAT 2025*. DOI: 10.4230/LIPIcs.SAT.2025.30

## Software Availability

- **GitHub**: https://github.com/szeider/mcp-solver
- **Version**: 3.4 (actively maintained, 2025)
- **License**: MIT
- **Integration**: Compatible with Claude, GPT-4, and other LLMs supporting the Model Context Protocol
- **Documentation**: Examples and API reference in the repository

## Broader Context: Neurosymbolic AI Research

This work integrates two complementary AI paradigms:

1. **Neural components** for pattern recognition, natural language understanding, and learning from data
2. **Symbolic components** for logical reasoning, constraint satisfaction, and formal verification

### Research Questions

- How can LLMs automatically formulate problems in solver languages (SAT, SMT, ASP, MiniZinc)? (See the sketch after this list.)
- When should reasoning tasks be delegated from neural to symbolic systems?
- What benchmarks evaluate neurosymbolic performance fairly?
- Which applications benefit most from solver-LLM integration?
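To make the first two questions concrete, here is a minimal sketch of the kind of formulation an LLM could hand off to a symbolic backend. It encodes a toy one-machine scheduling problem for Z3's Python API (one of the backends listed above); the tasks, durations, and deadline are illustrative choices and the code is not taken from the MCP Solver repository, which mediates such solver calls through Model Context Protocol tools rather than direct Python.

```python
# Minimal illustrative sketch (not code from the MCP Solver repository):
# a toy one-machine scheduling problem encoded for Z3's Python API,
# the kind of formulation an LLM might generate and delegate to an SMT backend.
# Requires the z3-solver package: pip install z3-solver
from z3 import Ints, Or, Solver, sat

s1, s2, s3 = Ints("s1 s2 s3")        # integer start times of three tasks
durations = {s1: 4, s2: 3, s3: 2}    # fixed task durations (illustrative)

solver = Solver()
for start, dur in durations.items():
    # Each task starts at or after time 0 and finishes by the deadline 10.
    solver.add(start >= 0, start + dur <= 10)

# Tasks share one machine: for every pair, one must end before the other starts.
tasks = list(durations.items())
for i in range(len(tasks)):
    for j in range(i + 1, len(tasks)):
        (a, da), (b, db) = tasks[i], tasks[j]
        solver.add(Or(a + da <= b, b + db <= a))

if solver.check() == sat:
    model = solver.model()
    print({str(v): model[v] for v in (s1, s2, s3)})  # one feasible schedule
else:
    print("unsatisfiable")  # no schedule meets the deadline
```

In the solver-LLM setting, the language model produces a comparable model, the symbolic backend checks it, and the answer (here a feasible schedule or `unsatisfiable`) comes back with the correctness guarantees of the underlying solver.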
### Application Domains

- Software engineering (bug finding, test generation, synthesis)
- Planning and scheduling with hard constraints
- Knowledge representation and reasoning
- Configuration and optimization problems
- Formal verification of AI-generated code

## Invited Talks and Recognition

- **SC² 2025** (10th International Workshop on Satisfiability Checking and Symbolic Computation, affiliated with CADE 2025, August 2, 2025, Stuttgart): Invited speaker
- **LLM-Solve 2025** (Workshop on LLMs meet Constraint Solving, CP/SAT 2025, August 11, 2025, Glasgow): Invited speaker (joint talk with PTHG)
- **PTHG 2025** (Workshop on Progress Towards the Holy Grail, CP/SAT 2025, August 11, 2025, Glasgow): Invited speaker (joint talk with LLM-Solve)
- **NSE 2025** (Neuro-Symbolic Software Engineering workshop, affiliated with ICSE 2025, May 5, 2025, Ottawa): Invited speaker
- **ML4SP 2025** (Workshop on Machine Learning for Solvers and Provers, CP/SAT 2025, August 10, 2025, Glasgow): Co-organizer

## Related Publications

### LLM-Solver Integration

**Bridging Language Models and Symbolic Solvers via the Model Context Protocol**
Stefan Szeider
*Proceedings of SAT 2025*, LIPIcs Vol. 341, pp. 30:1-30:12, DOI: 10.4230/LIPIcs.SAT.2025.30
MCP Solver system integrating SAT, SMT, ASP, and CP solvers with LLMs through the Model Context Protocol.

**Generating Streamlining Constraints with Large Language Models**
Florentina Voboril, Vaidyanathan Peruvemba Ramaswamy, Stefan Szeider
*Journal of Artificial Intelligence Research (JAIR)*, Vol. 84, 2025, DOI: 10.1613/jair.1.18965
Method for using LLMs to generate streamlining constraints that narrow search spaces in constraint satisfaction problems.

**Balancing Latin Rectangles with LLM-Generated Streamliners**
Florentina Voboril, Vaidyanathan Peruvemba Ramaswamy, Stefan Szeider
*Proceedings of CP 2025*, LIPIcs Vol. 340, pp. 36:1-36:17, DOI: 10.4230/LIPIcs.CP.2025.36
Extends LLM-generated streamliners to optimization problems with incremental refinement and evolutionary frameworks.

**StreamLLM: Constraint Programming with Large Language Model-Generated Streamliners**
Florentina Voboril, Vaidyanathan Peruvemba Ramaswamy, Stefan Szeider
*2025 IEEE/ACM 1st International Workshop on Neuro-Symbolic Software Engineering (NSE)*, May 2025
Uses LLMs to propose streamliners for MiniZinc constraint programs with real-time feedback.

### Autonomous Agent Behavior

**What Do LLM Agents Do When Left Alone? Evidence of Spontaneous Meta-Cognitive Patterns**
Stefan Szeider
arXiv:2509.21224, 2025
Study of LLM agent behavior without external tasks, revealing three distinct behavioral patterns across frontier models. [Podcast discussion](https://www.youtube.com/watch?v=c2LkiZ0RsUA)

**Agentic Python Coder**
Stefan Szeider
arXiv:2508.07468, 2025
System for autonomous software development using LLM agents with formal verification.

## Historical Context: Backdoor Sets and Parameterized Complexity

Szeider's work on backdoor sets (Journal of Automated Reasoning, 2005) established connections between problem structure and solver performance, influencing modern CDCL heuristics and informing today's neurosymbolic integration efforts.

**Contact**: sz@ac.tuwien.ac.at | [DBLP](https://dblp.org/pid/s/StefanSzeider.html) | [GitHub](https://github.com/szeider)