DPLL is a complete, backtracking-based algorithm for solving the SAT problem. The SAT problem consists of deciding the satisfiability of a
propositional logic formula expressed in CNF (conjunctive normal form). A logic formula is satisfiable when there is at least one
boolean assignment of its variables that makes the formula true, and it is a contradiction when no assignment accomplishes this.
CNF is a particular format for expressing logic formulas. In this notation, a literal is a single boolean variable, either direct or
negated (like A or NOT A). A clause is an OR of one or more literals, and a complete CNF formula is an AND of all the
clauses. For example, (A OR NOT B) AND (B OR C) is in CNF. Every propositional logic formula can be translated into CNF (in fact, CNF is the same as the Product of Sums form, which can
be derived from the maxterms of the boolean function), although the result may be very long and unreadable.
In my algorithm, every clause is represented by a Clausola object, while the formula is a list of those objects. The literals are represented
by integers ranging from -N to N, excluding 0 (N is the number of variables, which the class must know in advance). For example, 1 is the
equivalent of a variable x1, and -1 stands for NOT x1. The assignment is represented by an N-sized array of booleans, so that A[i-1] is the boolean
assignment of the variable i (not the literal! The variables are always direct; this is important).
The input is the formula and the number of variables involved, while the output is true if the formula is satisfiable, false otherwise.
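To make the encoding concrete, here is a minimal sketch of this representation. The class name Clausola matches the one mentioned above, but its internals and the surrounding code are my illustrative reconstruction, not the actual source:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal sketch of the representation: a Clausola holds its literals as
// signed integers, where k means x_k and -k means NOT x_k.
class Clausola {
    final List<Integer> literals;

    Clausola(Integer... lits) {
        this.literals = new ArrayList<>(Arrays.asList(lits));
    }

    @Override
    public String toString() { return literals.toString(); }
}

public class Encoding {
    public static void main(String[] args) {
        // (x1 OR NOT x2) AND (x2 OR x3), with N = 3 variables
        List<Clausola> formula = new ArrayList<>();
        formula.add(new Clausola(1, -2));
        formula.add(new Clausola(2, 3));

        // A[i-1] holds the value assigned to the variable x_i
        boolean[] assignment = new boolean[3];

        System.out.println(formula);  // prints [[1, -2], [2, 3]]
    }
}
```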
DPLL, using backtracking, is of course based on the idea of trying each possible assignment until one of them satisfies the formula
(in that case, the algorithm stops and returns true) or leads to a contradiction (in which case other routes are tried, if any remain).
But before doing this, it uses two techniques for simplifying the expression.
1) Unit propagation: if the algorithm finds a clause with just one literal, it immediately assigns the value true to that literal (because
false would obviously make the clause fail) and then "propagates" the new value through the formula. The clauses that contain that literal are deleted
from the formula, because they are already satisfied (a clause is a series of ORs, so it's enough that one literal is true), while !literal
is deleted from the clauses that contain it: being false, it won't help solve the OR in its clause, so the algorithm eliminates it.
Of course, after every propagation the operation should be restarted from the beginning, checking for other unit clauses that may have originated
from the previous operations. You can see all this in the UnitPropagate method: because, as I said, the literal can be a negative number,
I need a little check before using it as an index to set the right assignment (remember: if NOT A is a unit clause, A
should be assigned the value false, because the array represents assignments for the variables, not the literals).
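The propagation loop described above can be sketched like this (class and method names, and the use of plain lists of integers instead of Clausola objects, are my illustrative assumptions):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of unit propagation. Literals are signed ints: k is x_k, -k is NOT x_k.
public class UnitPropagation {

    // Repeatedly finds a unit clause, records the forced assignment in A,
    // and simplifies the formula in place.
    public static void unitPropagate(List<List<Integer>> formula, boolean[] A) {
        Integer unit;
        while ((unit = findUnit(formula)) != null) {
            final int lit = unit;
            // The variable index is abs(lit); the value depends on the sign:
            // a unit clause NOT x_k forces x_k = false.
            A[Math.abs(lit) - 1] = lit > 0;
            // Clauses containing lit are now satisfied: drop them whole.
            formula.removeIf(clause -> clause.contains(lit));
            // !lit is now false: it cannot help an OR, so delete it.
            for (List<Integer> clause : formula)
                clause.removeIf(l -> l == -lit);
        }  // the loop restarts: simplification may have created new unit clauses
    }

    private static Integer findUnit(List<List<Integer>> formula) {
        for (List<Integer> clause : formula)
            if (clause.size() == 1) return clause.get(0);
        return null;
    }

    public static void main(String[] args) {
        // { {NOT x1}, {x1 OR x2} }: propagation forces x1 = false, then x2 = true
        List<List<Integer>> f = new ArrayList<>();
        f.add(new ArrayList<>(Arrays.asList(-1)));
        f.add(new ArrayList<>(Arrays.asList(1, 2)));
        boolean[] A = new boolean[2];
        unitPropagate(f, A);
        System.out.println(f + " " + Arrays.toString(A));  // [] [false, true]
    }
}
```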
2) Pure literal assignment: when one of the literals appears throughout the clauses in just one form (that is, always direct, or always negated), true is
assigned to it (it satisfies every clause it appears in, and its negation never occurs, so nothing can be lost), and then I simply use the previous method to update the formula. Again, pay
attention to the actual boolean assignment, which differs according to the form of the literal.
Then the base cases are checked. If F is empty, the algorithm eliminated all the clauses step by step (see unit propagation) because
they were true. So all the clauses were true -> F is satisfied, and the array contains this winning assignment. If F contains an
empty clause, then the algorithm little by little deleted all its literals because they were false. So the literals in that clause were all
false -> the clause is false -> F is not satisfied by this particular model.
Note that these conditions strongly depend on the simplification done before: without it, they would be useless.
After this, a literal L in F is chosen and two new formulas are constructed: F1 contains F plus the unit clause L, F2 contains F plus the unit clause !L. The recursive step
basically tries the two possibilities for that literal, and the recursion will find out whether one of them works. A single valid
assignment makes the function exit from all the recursive calls and return true, while false just forces the algorithm to take another route.
When all the OR conditions in the recursive calls have failed, we can be sure that F is a contradiction, because no model satisfied it, and
false is ultimately returned to the caller.
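Putting the pieces together, the whole recursion can be sketched as follows. This simplified version is my reconstruction, not the actual source: it uses plain lists of signed integers, skips the pure-literal rule, and reports only satisfiable/unsatisfiable instead of filling an assignment array:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Compact sketch of the DPLL recursion: simplify by unit propagation,
// check the two base cases, then branch on a literal.
public class Dpll {

    public static boolean dpll(List<List<Integer>> formula) {
        formula = unitPropagate(formula);
        if (formula.isEmpty()) return true;          // every clause satisfied
        for (List<Integer> c : formula)
            if (c.isEmpty()) return false;           // an all-false clause remains

        int lit = formula.get(0).get(0);             // first literal available
        // Branch: try F plus the unit clause {lit}, then F plus {-lit}.
        return dpll(withUnit(formula, lit)) || dpll(withUnit(formula, -lit));
    }

    // Returns a copy of the formula with the extra unit clause {lit}.
    private static List<List<Integer>> withUnit(List<List<Integer>> formula, int lit) {
        List<List<Integer>> copy = new ArrayList<>();
        for (List<Integer> c : formula) copy.add(new ArrayList<>(c));
        copy.add(new ArrayList<>(Arrays.asList(lit)));
        return copy;
    }

    // Non-destructive unit propagation: works on a copy of the formula.
    private static List<List<Integer>> unitPropagate(List<List<Integer>> formula) {
        List<List<Integer>> f = new ArrayList<>();
        for (List<Integer> c : formula) f.add(new ArrayList<>(c));
        Integer unit;
        while ((unit = findUnit(f)) != null) {
            final int lit = unit;
            f.removeIf(c -> c.contains(lit));              // clause already true
            for (List<Integer> c : f) c.removeIf(l -> l == -lit);  // literal is false
        }
        return f;
    }

    private static Integer findUnit(List<List<Integer>> f) {
        for (List<Integer> c : f)
            if (c.size() == 1) return c.get(0);
        return null;
    }

    public static void main(String[] args) {
        // (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3): satisfiable
        List<List<Integer>> sat = Arrays.asList(
            Arrays.asList(1, 2), Arrays.asList(-1, 2), Arrays.asList(-2, 3));
        System.out.println(dpll(sat));  // prints true
    }
}
```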
There are some optimizations I didn't use because they were too complex. Some of them involve choosing the branching literal among the most frequent
ones, so that the simplification step does more useful work (instead, I just pick the first one available), and "remembering" the roots of
a particular conflict so that it is not re-created on another route.
Despite an exponential worst-case execution time, DPLL still forms the core of most modern SAT solvers.
The code (written in Java) was too long to report here, because it allows the user to enter the clauses via the command line. You can find
the whole source in the Code section. Run the program without any arguments to see how to use it. It's still a beta version, so it may
fail on some inputs if I left any bugs in it.