Preguss: It Analyzes, It Specifies, It Verifies

Abstract

Fully automated verification of large-scale software and hardware systems is arguably the holy grail of formal methods. Large language models (LLMs) have recently demonstrated their potential for enhancing the degree of automation in formal verification, e.g., by generating the formal specifications essential to deductive verification. Yet they exhibit poor scalability due to context-length limitations and, more importantly, the difficulty of inferring complex, interprocedural specifications. This paper outlines Preguss — a modular, fine-grained framework for automating the generation and refinement of formal specifications. Preguss synergizes static analysis and deductive verification by orchestrating two components: (i) potential runtime error (RTE)-guided construction and prioritization of verification units, and (ii) LLM-aided synthesis of interprocedural specifications at the unit level. We envisage that Preguss paves a compelling path towards the automated verification of large-scale programs.

Publication
In LMPL 2025
Zhongyi Wang
Ph.D. Candidate

My research interests include formal verification and program analysis.

Tengjie Lin
M.Sc. Candidate

My research interests include formal methods and program analysis.

Mingshuai Chen
ZJU100 Young Professor

My research interests include formal verification, programming theory, and logical aspects of computer science.

Mingqi Yang
Ph.D. Candidate

My research interests include formal verification, programming theory, and mathematical aspects of computer science.