Abstract
Despite its NP-completeness, the Boolean satisfiability problem has given rise to highly efficient tools that can find solutions to a Boolean formula and count them.
Boolean formulae compactly encode huge, constrained search spaces for variability-intensive systems, e.g., the possible configurations of the Linux kernel.
These search spaces are generally too big to explore exhaustively, leading most testing approaches to sample a
few solutions before analysing them. A desirable property of such samples is \textit{uniformity}: each solution should get the same selection probability.
This property motivated the design of uniform random samplers, which rely on SAT solvers and model counters and achieve different trade-offs between uniformity and scalability.
Though we can observe their performance in practice, understanding and accurately predicting the complexity these tools face is an under-explored problem.
Indeed, structural metrics such as the number of variables and clauses in a formula are poor predictors of sampling complexity. More elaborate ones, such as the minimal independent support (MIS), are intractable to compute on large formulae.
We provide an efficient parallel algorithm to compute a related metric, the \textit{number of equivalence classes}, and demonstrate that it is highly correlated with the time and memory usage of uniform random sampling and model counting tools. We explore the role of formula preprocessing on various metrics and show its positive influence on these correlations. Relying on these correlations, we train an efficient classifier (F1-score 0.97) to predict whether uniformly sampling a given formula will exceed a specified budget. Our results allow us to characterise the similarities and differences between (uniform) sampling, solving and counting.
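The uniformity property defined above can be illustrated with a brute-force sketch: enumerate every model of a tiny CNF formula, then draw one at random, so each model is selected with probability 1/#models. This is only a toy illustration under our own encoding (signed-integer literals, a hypothetical `satisfies` helper), not the samplers studied in the paper — which rely on SAT solvers and counters precisely because exhaustive enumeration does not scale to formulae like the Linux kernel's configuration space.

```python
import itertools
import random

# Toy CNF over variables 1..3: (x1 OR x2) AND (NOT x1 OR x3).
# A clause is a list of signed literals: positive = variable, negative = its negation.
clauses = [[1, 2], [-1, 3]]
n_vars = 3

def satisfies(assignment, clauses):
    """True if every clause has at least one literal made true by `assignment`."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# Exhaustively enumerate the 2^n assignments and keep the models.
solutions = []
for bits in itertools.product([False, True], repeat=n_vars):
    assignment = dict(zip(range(1, n_vars + 1), bits))
    if satisfies(assignment, clauses):
        solutions.append(assignment)

# Uniform sampling is trivial once all models are known:
# each of the len(solutions) models gets probability 1/len(solutions).
sample = random.choice(solutions)
print(len(solutions), sample)
```

For this formula, 4 of the 8 assignments are models, so each is drawn with probability 1/4. Real uniform samplers avoid the exponential enumeration step, trading some uniformity for scalability.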
| Original language | English |
| --- | --- |
| DOIs | |
| Publication status | Published - 2024 |
| Event | 12th International Conference on Formal Methods in Software Engineering - Lisbon, Portugal. Duration: 14 Apr 2024 → 15 Apr 2024. https://formalise2024.github.io |
Conference

| Conference | 12th International Conference on Formal Methods in Software Engineering |
| --- | --- |
| Country/Territory | Portugal |
| City | Lisbon |
| Period | 14/04/24 → 15/04/24 |
| Internet address | https://formalise2024.github.io |