Derandomisation and Randomised Rounding
Derandomisation
-
Sometimes it is possible to "derandomise" a randomised algorithm Arand and obtain a deterministic algorithm Adet.
-
The performance of Adet is at least as good as the expected performance of Arand.
-
When successful, we can use randomisation at no extra cost!
(except a polynomial running time overhead).
-
There are different methods for derandomisation; here we use the method of conditional expectations, illustrated on MAX-SAT.
-
Algorithm: For each variable xi, set xi to 1 with probability 1/2 and to 0 with probability 1/2.
-
Algorithm: Set variable x1 to 1 or 0 deterministically, and the remaining variables to 1 with probability 1/2 and to 0 with probability 1/2, as before.
-
Goal: pick the deterministic value so as to maximise the expected number W of satisfied clauses.
-
If W is the number of satisfied clauses, then we have:
E[W] = E[W | x1 ← 1] · Pr[x1 ← 1] + E[W | x1 ← 0] · Pr[x1 ← 0]
= 1/2 · (E[W | x1 ← 1] + E[W | x1 ← 0])
-
We set x1 to 1 if E[W | x1 ← 1] ≥ E[W | x1 ← 0] and to 0 otherwise.
-
Generally, if b1 is picked to maximise the conditional expectation, it holds that: E[W | x1 ← b1] ≥ E[W]
Applying this to all variables
-
Assume that we have set variables x1, …, xi to b1, …, bi this way.
-
We set xi+1 to 1 if
E[W | x1 ← b1, x2 ← b2, …, xi ← bi, xi+1 ← 1] ≥ E[W | x1 ← b1, x2 ← b2, …, xi ← bi, xi+1 ← 0]
and to 0 otherwise.
-
Again, if bi+1 is picked to maximise the conditional expectation, it holds that:
E[W | x1 ← b1, x2 ← b2, …, xi ← bi, xi+1 ← bi+1] ≥ E[W]
In the end
-
In the end we have set all variables deterministically (i.e., we have derandomised the algorithm).
-
We have E[W | x1 ← b1, x2 ← b2, …, xn ← bn] ≥ E[W]
-
We know that E[W] ≥ 1/2 · OPT
-
We have devised a deterministic 2-approximation algorithm.
-
Is it polynomial-time? Yes
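The argument above can be summarised in one chain of inequalities (a restatement in the notation of these slides, where W(b1, …, bn) denotes the number of clauses satisfied by the final deterministic assignment):

```latex
\[
  W(b_1,\dots,b_n)
  \;=\; E[W \mid x_1 \leftarrow b_1,\dots,x_n \leftarrow b_n]
  \;\ge\; \dots
  \;\ge\; E[W \mid x_1 \leftarrow b_1]
  \;\ge\; E[W]
  \;\ge\; \tfrac{1}{2}\,\mathrm{OPT}.
\]
```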
Computing the expectations
- We have to be able to compute the conditional expectations in polynomial time.
E[W | x1 ← b1, …, xi ← bi] = Σ_{j=1..m} E[Wj | x1 ← b1, …, xi ← bi]
= Σ_{j=1..m} Pr[clause Cj is satisfied | x1 ← b1, …, xi ← bi]
where Wj is the indicator that clause Cj is satisfied and m is the number of clauses.
-
The probability is
-
1 if the variables already set satisfy the clause.
-
1 − (1/2)^k otherwise, where k is the number of variables in the clause that are still unset.
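Putting the pieces above together, here is a minimal runnable sketch of the method of conditional expectations for MAX-SAT. The clause encoding (a clause is a list of integers, +i for xi and −i for ¬xi) and the function names are assumptions made for this sketch, not notation from the slides; it also assumes no clause contains a variable together with its negation.

```python
# Minimal sketch of the method of conditional expectations for MAX-SAT.
# A clause is a list of literals: +i means x_i, -i means "not x_i".

def clause_prob(clause, assignment):
    """Pr[clause satisfied] when the unset variables are fair coin flips."""
    unset = 0
    for lit in clause:
        var, want = abs(lit), (lit > 0)
        if var in assignment:
            if assignment[var] == want:    # clause already satisfied
                return 1.0
        else:
            unset += 1
    return 1.0 - 0.5 ** unset              # 0 if every literal is already falsified

def conditional_expectation(clauses, assignment):
    return sum(clause_prob(c, assignment) for c in clauses)

def derandomised_max_sat(clauses, n):
    """Greedily fix x_1, ..., x_n so the conditional expectation never drops."""
    assignment = {}
    for i in range(1, n + 1):
        e1 = conditional_expectation(clauses, {**assignment, i: True})
        e0 = conditional_expectation(clauses, {**assignment, i: False})
        assignment[i] = e1 >= e0
    return assignment

# Example: (x1 v x2) ^ (~x1 v x2) ^ (x1 v ~x2) ^ (~x1 v ~x2)
print(derandomised_max_sat([[1, 2], [-1, 2], [1, -2], [-1, -2]], 2))
```

Since each greedy choice keeps the conditional expectation at least E[W] ≥ m/2, the returned assignment satisfies at least half of the m clauses.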
Method of conditional expectations
Recall: Deterministic Rounding
-
For the case of vertex cover we can solve the LP-relaxation in polynomial time, to find an optimal solution.
-
The optimal solution is a "fractional" vertex cover, where variables can take values between 0 and 1.
-
We round the fractional solution to an integer solution.
-
We pick a variable xi and we set it to 1 or 0.
-
If we set everything to 0, it is not a vertex cover.
-
If we set everything to 1, we "pay" too much.
-
We set variable xi to 1 if xi ≥ 1/2 and to 0 otherwise.
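As a small illustration of this rounding step (the dictionary representation, and the assumption that the fractional LP solution is already available from some LP solver, are just for the sketch):

```python
# Sketch of the deterministic rounding step for Vertex Cover.
# frac_solution: dict vertex -> LP value in [0, 1].

def round_vertex_cover(frac_solution):
    """Round the fractional vertex cover: keep every vertex with value >= 1/2."""
    return {v for v, x in frac_solution.items() if x >= 0.5}

# Every edge (u, v) satisfies x_u + x_v >= 1 in the LP, so at least one endpoint
# is rounded up; the result is a vertex cover of cost at most twice the LP optimum.
print(round_vertex_cover({1: 0.5, 2: 0.5, 3: 0.0}))   # {1, 2}
```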
Randomised Rounding
-
We formulate the problem as an ILP.
-
We write the LP-relaxation.
-
We solve the LP-relaxation.
-
We round the variables with probabilities that can depend on their values.
-
Let (y*, z*) be an optimal solution to the LP-relaxation.
-
Rounding: Set xi to true independently with probability yi*.
- e.g., if y* = (1/3, 1/4, 5/6, 1/2, …) we will set variables x1, x2, x3, x4, … to true with probabilities 1/3, 1/4, 5/6, 1/2, …, respectively.
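The rounding step itself is a one-liner; here is a minimal sketch (assuming the optimal LP values y* have already been computed by an external LP solver; the dictionary representation is an assumption for the sketch):

```python
import random

def randomised_rounding(y_star):
    """Set each variable to True independently with probability y*_i."""
    return {i: (random.random() < y) for i, y in y_star.items()}

# e.g. y* = (1/3, 1/4, 5/6, 1/2) as in the example above
print(randomised_rounding({1: 1/3, 2: 1/4, 3: 5/6, 4: 1/2}))
```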
MAX SAT as an ILP
Variables: yi = 1 if xi is true and 0 otherwise.
We denote clause Cj by ⋁_{i ∈ Pj} xi ∨ ⋁_{i ∈ Nj} ¬xi, where Pj and Nj are the sets of variables occurring unnegated and negated in Cj.
Variables: zj = 1 if clause Cj is satisfied and 0 otherwise.
We have the inequality: Σ_{i ∈ Pj} yi + Σ_{i ∈ Nj} (1 − yi) ≥ zj
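Collecting the pieces above, the full ILP reads as follows (unit clause weights are assumed here; with weights wj the objective becomes Σ wj zj):

```latex
% MAX-SAT as an ILP; P_j / N_j are the variables occurring unnegated / negated in C_j.
\begin{align*}
  \text{maximise}\quad   & \sum_{j=1}^{m} z_j \\
  \text{subject to}\quad & \sum_{i \in P_j} y_i + \sum_{i \in N_j} (1 - y_i) \;\ge\; z_j
                           \qquad \text{for every clause } C_j,\\
                         & y_i \in \{0,1\}, \qquad z_j \in \{0,1\}.
\end{align*}
% The LP-relaxation replaces the last line by 0 <= y_i <= 1 and 0 <= z_j <= 1.
```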




Randomised Rounding for MAX-SAT
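As a hedged sketch of the standard analysis behind the 1/(1 − 1/e) ratio claimed later (using the AM-GM inequality and the LP constraint for clause Cj): for a clause Cj with k literals,

```latex
\[
  \Pr[C_j \text{ not satisfied}]
  \;=\; \prod_{i \in P_j} (1 - y_i^*) \prod_{i \in N_j} y_i^*
  \;\le\; \Bigl(1 - \tfrac{z_j^*}{k}\Bigr)^{k},
\]
\[
  \Pr[C_j \text{ satisfied}]
  \;\ge\; 1 - \Bigl(1 - \tfrac{z_j^*}{k}\Bigr)^{k}
  \;\ge\; \Bigl(1 - \bigl(1 - \tfrac{1}{k}\bigr)^{k}\Bigr) z_j^*
  \;\ge\; \Bigl(1 - \tfrac{1}{e}\Bigr) z_j^*.
\]
```

Summing over all clauses gives E[W] ≥ (1 − 1/e) · Σj zj* ≥ (1 − 1/e) · OPT.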
The better of the two
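One standard "better of the two" approach, sketched here under the assumption that we simply run the fair-coin algorithm and LP randomised rounding and keep whichever assignment satisfies more clauses (the clause encoding and the assumption that y* is precomputed are, again, only for the sketch):

```python
import random

# Clause encoding: +i means x_i, -i means "not x_i".

def satisfied(clauses, assignment):
    """Number of clauses satisfied by a (complete) truth assignment."""
    return sum(any(assignment[abs(lit)] == (lit > 0) for lit in c) for c in clauses)

def better_of_two(clauses, y_star):
    """Run both randomised algorithms and keep the better assignment."""
    coin = {i: random.random() < 0.5 for i in y_star}            # fair coin flips
    lp = {i: random.random() < y for i, y in y_star.items()}     # LP rounding
    return max(coin, lp, key=lambda a: satisfied(clauses, a))

# Small example formula: (x1 v x2) ^ (~x1 v x2) ^ (x1 v ~x2) ^ (~x1 v ~x2)
clauses = [[1, 2], [-1, 2], [1, -2], [-1, -2]]
print(better_of_two(clauses, {1: 0.5, 2: 0.5}))
```

In expectation, the better of the two satisfies at least 3/4 · Σj zj* clauses, which is where the 4/3 approximation ratio mentioned below comes from.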



Algorithms for MAX-SAT
-
Our Randomised Rounding algorithm gives an approximation ratio of 1/(1 − 1/e) ≈ 1.59.
-
This is better than 2. (unbiased coin flip for each variable)
-
This is better than 1.618. (Why this number? It is the ratio achieved by the optimal biased coin flip for all variables; see below.)
-
The "better of the two" algorithm has approximation ratio 4/3 ≈ 1.33.
Back to Randomised Rounding
Integrality Gap of MAX-SAT
-
Consider the formula: (x1 ∨ x2) ∧ (¬x1 ∨ x2) ∧ (x1 ∨ ¬x2) ∧ (¬x1 ∨ ¬x2)
-
The optimal integral solution satisfies 3 clauses (any truth assignment to x1, x2 falsifies exactly one of the four clauses).
-
The optimal fractional solution sets
y1 = y2 = 1/2 and zj = 1 for all j (each clause constraint reads 1/2 + 1/2 = 1 ≥ zj), so the LP objective value is 4.
-
The integrality gap is at least 4/3.
What does this mean?
-
We cannot hope to design an LP-relaxation and rounding-based algorithm (for this ILP formulation) that outperforms our "better of the two" algorithm.
-
Can we design one that matches the 4/3 approximation ratio?
-
Yes we can!
-
Instead of "Set xi to true independently with probability yi*",
-
we use "Set xi to true independently with probability f(yi*)", for some function f.
-
Which function f?
- Any function f such that 1 − 4^(−x) ≤ f(x) ≤ 4^(x−1). (Such an f exists, since 1 − 4^(−x) ≤ 4^(x−1) for all x in [0, 1].)
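A minimal sketch of this nonlinear rounding, taking f(x) = 1 − 4^(−x) as one concrete choice that satisfies both bounds (the y* values are again assumed to come from the LP-relaxation solved externally):

```python
import random

def f(x):
    """One valid choice: the lower envelope 1 - 4^(-x)."""
    return 1.0 - 4.0 ** (-x)

def nonlinear_randomised_rounding(y_star):
    """Set each variable to True independently with probability f(y*_i)."""
    return {i: (random.random() < f(y)) for i, y in y_star.items()}

print(nonlinear_randomised_rounding({1: 1/3, 2: 1/4, 3: 5/6, 4: 1/2}))
```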


Algorithms for MAX-SAT
-
Our first RR algorithm gives an approximation ratio of 1/(1 − 1/e) ≈ 1.59.
-
This is better than 2. (fair coin flip for all variables)
-
This is better than 1.618. (optimal coin flip for all variables)
-
The "better of the two" algorithm has approximation ratio 4/3 ≈ 1.33.
-
The more sophisticated RR algorithm has an approximation ratio of 4/3 ≈ 1.33.