
Constrained Horn Clauses (CHCs) are a fragment of First-Order Logic (FOL) that is expressive enough to describe many verification, inference, and synthesis problems, including inductive invariant inference, model checking of safety properties, inference of procedure summaries, regression verification, and sequential equivalence checking. The CHC competition (CHC-COMP) compares state-of-the-art tools for CHC solving with respect to performance and effectiveness on a set of publicly available benchmarks. The winners among the participating solvers are determined by the number of correctly solved benchmarks as well as by runtime. CHC-COMP 2025 was associated with the Workshop on Horn-Clause Verification and Synthesis (HCVS 2025).
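
As a concrete, hand-written illustration (not one of the competition benchmarks): CHC problems are posed in the SMT-LIB `HORN` logic, and solving them amounts to finding an interpretation of the unknown predicates. The clauses below encode invariant inference for a simple counter loop, with the unknown invariant `Inv`:

```smt2
(set-logic HORN)
(declare-fun Inv (Int) Bool)
; Initiation: the loop starts with x = 0.
(assert (forall ((x Int)) (=> (= x 0) (Inv x))))
; Consecution: while x < 10, each iteration increments x.
(assert (forall ((x Int)) (=> (and (Inv x) (< x 10)) (Inv (+ x 1)))))
; Safety: x must never exceed 10.
(assert (forall ((x Int)) (=> (and (Inv x) (> x 10)) false)))
(check-sat)
```

A CHC solver answers sat here, since for instance `Inv(x) := x <= 10` satisfies all three clauses; an unsat answer would correspond to a reachable safety violation.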

Competition Results from 2025

Notes on result tables and plots:

  • Plots are "quantile plots": for each tool, the results are sorted by the CPU time it took to produce them (lower is faster); the overall score corresponds to the rightmost value
  • Click on a category name for an aggregate table including all tools
  • Click on a tool name to see a table of its individual results
  • Click on a plot to enlarge it
  • The linked tables were generated by BenchExec
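
To make reading the plots concrete, here is a minimal sketch (not competition code) of how the points of a quantile plot are derived; it assumes each input value is the CPU time of one correctly solved benchmark:

```python
# Minimal sketch of a quantile plot's data points (illustrative only).
# Assumed input: CPU times, in seconds, of one tool's correctly solved
# benchmarks; unsolved benchmarks contribute no point at all.
def quantile_points(cpu_times):
    """Sort times ascending and rank them: point (i, t) means the tool
    produced its i fastest correct results in at most t seconds each.
    The rightmost x-value is the tool's overall score (solved count)."""
    return [(i + 1, t) for i, t in enumerate(sorted(cpu_times))]

points = quantile_points([12.0, 3.5, 40.2])
print(points)  # [(1, 3.5), (2, 12.0), (3, 40.2)]
```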

Disclaimer: Results that are inconsistent across tools are currently omitted. As a consequence, some tools miss out on points for correct answers while others are not penalized for wrong ones; since the ground truth is not always clear in such cases, we will resolve this in a future edition.

Participants (Tool, Submitter)

  • CHC2C 1.0 (Mihály Dobos-Kovács, Levente Bajczi, András Vörös), Note: meta-solver
  • ChocoCatalia (Hiroyuki Katsura, Naoki Kobayashi, Ryosuke Sato)
  • Eldarica 2.2 (Hossein Hojjat, Philipp Ruemmer)
  • Golem 0.7.1 (Martin Blicha)
  • LoAT (Florian Frohn, Jürgen Giesl)
  • MuCyc (Kazuki Uehara, Hiroshi Unno)
  • PCSat (Takuma Monma, Hiroshi Unno)
  • ThetaCHC 6.13.2 (Levente Bajczi, Mihály Dobos-Kovács, Márk Somorjai, András Vörös)
  • Ultimate TreeAutomizer (Matthias Heizmann, Max Barth, Daniel Dietsch, Dominik Klumpp)
  • Ultimate Unihorn (Matthias Heizmann, Max Barth, Daniel Dietsch, Dominik Klumpp)

LIA-Lin (⇒ compare results)

Tool                     sat  unsat  CPU (s)  wall (s)
Golem                    762    410   240388     60260
MuCyc                    694    403   399364    399351
LoAT                     622    410   486478    129799
Eldarica                 651    354   600893    144060
Ultimate Unihorn         435    360   855904    779128
PCSat                    607    136  1033267   1033319
Ultimate TreeAutomizer   332    290  1155740   1087873
ThetaCHC                 114    376  1457498   1449069
CHC2C                    160      5  1162523    734400

LIA-Nonlin (⇒ compare results)

Tool                     sat  unsat  CPU (s)  wall (s)
Golem                    734    429   173249     86640
Eldarica                 644    375   506038    174340
PCSat                    650    367   454110    454104
Ultimate Unihorn         349    245   565136    498393
ThetaCHC                  52    140  1536482   1030551
Ultimate TreeAutomizer    34    141  1273739   1194302
CHC2C                    162      0  1376846   1073896

LIA-Lin-Arrays (⇒ compare results)

Tool                     sat  unsat  CPU (s)  wall (s)
Eldarica                  53     18   110842     28520
PCSat                     43      9   154220    154226
Ultimate Unihorn          42      9    45106     40102
ThetaCHC                  45      4   123955    123138
Golem                     22     19   161173     80595
Ultimate TreeAutomizer    25      8    32333     29858

LIA-Nonlin-Arrays (⇒ compare results)

Tool                     sat  unsat  CPU (s)  wall (s)
Eldarica                1022    678    60848     14963
PCSat                   1008    567   256250    256264
Ultimate Unihorn         659    487   102567     67300
Ultimate TreeAutomizer     5    416  2207110   2023544
ThetaCHC                 400      8   200002    153837

ADT-LIA (⇒ compare results)

Tool                     sat  unsat  CPU (s)  wall (s)
ChocoCatalia             297    146  1044483   1043861
Eldarica                  72    162  1468617    367375
PCSat                     39     87  1556440   1556422

ADT-LIA-Arrays (⇒ compare results)

Tool                     sat  unsat  CPU (s)  wall (s)
Eldarica                1981   1000   277898     78469
PCSat                   1697    321  1489216   1489263

BV (⇒ compare results)

Tool                     sat  unsat  CPU (s)  wall (s)
Eldarica                 117    151   399052     95632
ThetaCHC                  49    123   470348    396892
PCSat                     10      3   109664    109653

LRA-Lin (⇒ compare results)

Tool                     sat  unsat  CPU (s)  wall (s)
Golem                    203     29    73506     18382
Eldarica                  97     17   293498    103788
ThetaCHC                  73     18   321849    169737
PCSat                      4      6   443947    443933

Schedule

  • April 25: solver submission deadline
  • April 25: benchmark submission deadline
  • May 8: presentation of results at HCVS 2025, co-located with ETAPS 2025, Hamilton, Canada

Benchmarks

The competition benchmarks were selected from the benchmarks available on GitHub. The format of the benchmarks is described here; a detailed description of the benchmark selection process can be found in CHC-COMP 2022: Competition Report.

The following tracks were evaluated at CHC-COMP 2025:

   LIA-Lin:    Linear Integer Arithmetic, linear clauses
   LIA-Nonlin:    Linear Integer Arithmetic, nonlinear clauses
   LIA-Lin-Arrays:    Linear Integer Arithmetic + Arrays, linear clauses
   LIA-Nonlin-Arrays:    Linear Integer Arithmetic + Arrays, nonlinear clauses
   ADT-LIA:    Linear Integer Arithmetic + non-recursive Algebraic data-types, nonlinear clauses
   ADT-LIA-Arrays:    Algebraic data-types + Arrays + Linear Integer Arithmetic, nonlinear clauses
   LRA-Lin:    Linear Real Arithmetic, linear clauses
   BV:    Bitvectors

In addition to the theories listed in the track names, benchmarks can also use the Bool theory.
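
Note that "linear" and "nonlinear" in the track names refer to the clause structure, not to the arithmetic: a clause is linear if its body contains at most one occurrence of an uninterpreted predicate. A small hand-written SMT-LIB sketch of the distinction (the predicates `P`, `Q`, `R` are illustrative, not taken from any benchmark):

```smt2
; Linear clause: at most one predicate occurrence in the body.
(assert (forall ((x Int))
  (=> (P x) (Q (+ x 1)))))
; Nonlinear clause: two predicate occurrences in the body, as arises
; e.g. when a procedure summary combines two sub-calls.
(assert (forall ((x Int) (y Int))
  (=> (and (P x) (P y)) (R (+ x y)))))
```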

Novel features of CHC-COMP in 2025

  • Model Validation: A model validation demo track was introduced. Participants could opt-in to have their sat models checked independently.
  • Infrastructure: Evaluations ran on the SV-COMP infrastructure at LMU Munich.

Organizers

Big thanks to

  • Dirk Beyer, Po-Chun Chien, and Matthias Kettl for support with the experimental setup in 2025
  • SoSy-Lab at LMU Munich for sponsoring computing resources in 2025
  • Everybody who organized and helped with the competition infrastructure over the last years:
    Nikolaj Bjørner, Adrien Champion, Emanuele De Angelis, Grigory Fedyukovich, Hari Govind V K, Arie Gurfinkel, Dejan Jovanovic, and Philipp Ruemmer
  • Aaron Stump and StarExec for computing resources in past years

Previous editions of CHC-COMP


Last updated: 2026-03-25 16:24:53 +0000