Derandomisation and Randomised Rounding

Derandomisation

  • Sometimes it is possible to β€œderandomise” a randomised algorithm Arand and obtain a deterministic algorithm Adet.

  • The performance of Adet is at least as good as the expected performance of Arand.

  • When successful, we can use randomisation at no extra cost (except a polynomial running-time overhead)!

  • Different methods for derandomisation.

    • Can be very complicated (pseudo-random generators).

    • Can be relatively simple (conditional expectations).

  • Randomised algorithm (for MAX SAT): For each variable xi, set xi to 1 with probability 1/2 and to 0 with probability 1/2 (a sketch in code follows this list).

  • Derandomised algorithm (first step): Set variable x1 to 1 or 0 deterministically, and set the remaining variables to 1 with probability 1/2 and to 0 with probability 1/2, as before.

  • Goal: choose this deterministic value so as to maximise the expected value 𝔼[π‘Š].

  • If π‘Š is the number of clauses satisfied then we have:

    𝔼[π‘Š] = 𝔼[π‘Š|π‘₯ ←1]β‹…Pr[π‘₯ ←1]+𝔼[π‘Š|π‘₯ ←0]β‹…Pr[π‘₯ ←0] = 1/2 (𝔼[π‘Š|π‘₯ ←1] * +𝔼[π‘Š|π‘₯ ←0])

  • We set x1 to 1 if 𝔼[π‘Š|π‘₯1 ← 1] β‰₯ 𝔼[π‘Š|π‘₯1 ← 0] and to 0 otherwise.

  • Generally, if b1 is picked to maximise the conditional expectation, it holds that 𝔼[π‘Š|π‘₯1 ← 𝑏1] β‰₯ 𝔼[π‘Š], since 𝔼[π‘Š] is the average of the two conditional expectations, and their maximum is at least their average.
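A minimal sketch of the randomised algorithm in Python, assuming clauses are given as lists of signed 1-based literals (+i for xi, βˆ’i for ¬xi); this representation is an assumption for illustration, not part of the original slides:

    import random

    def random_assignment(n):
        # Set each variable to True/False independently with probability 1/2.
        return [random.random() < 0.5 for _ in range(n)]

    def count_satisfied(clauses, assignment):
        # A clause (a list of signed literals) is satisfied if any literal is true.
        return sum(
            any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
            for clause in clauses
        )

    # Example: (x1 OR x2) AND (NOT x1 OR x2)
    clauses = [[1, 2], [-1, 2]]
    print(count_satisfied(clauses, random_assignment(2)))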

Applying this to all variables

  • Assume that we have set variables x1, …, xi to b1, …, bi this way.

  • We set xi+1 to 1 if the following inequality holds, and to 0 otherwise:

𝔼[π‘Š|π‘₯1 ← 𝑏1, π‘₯2 ← 𝑏2, …, π‘₯i ← 𝑏i, π‘₯i+1 ← 1] β‰₯ 𝔼[π‘Š|π‘₯1 ← 𝑏1, π‘₯2 ← 𝑏2, …, π‘₯i ← 𝑏i, π‘₯i+1 ← 0]

  • Again, if bi+1 is picked to maximise the conditional expectation, it holds that:

    𝔼[π‘Š|π‘₯1 ← 𝑏1,π‘₯2 ← 𝑏2,…,π‘₯i ← 𝑏i,π‘₯i+1 ← 𝑏i+1] β‰₯ 𝔼[π‘Š]

In the end

  • In the end we have set all variables deterministically; we have derandomised the algorithm.

  • We have 𝔼[π‘Š|π‘₯1 ← 𝑏1,π‘₯2 ← 𝑏2,…,π‘₯i ← 𝑏i,π‘₯i+1 ← 𝑏i+1] β‰₯ 𝔼[π‘Š]

  • We know that 𝔼[π‘Š] β‰₯ (1/2) β‹… OPT.

  • We have devised a deterministic 2-approximation algorithm.

  • Is it polynomial-time? Yes, provided we can compute the conditional expectations in polynomial time (next).

Computing the expectations

  • We have to be able to compute the conditional expectations in polynomial time.

𝔼[π‘Š|π‘₯1 ← 𝑏1 ,…,π‘₯i ←𝑏i]= βˆ‘π”Ό[π‘Œ|π‘₯1 ←𝑏1 ,…,π‘₯i ←𝑏i]

= βˆ‘ i=1^m Pr[clause𝐢issatisfied|π‘₯ ←𝑏 ,…,π‘₯ ←𝑏]

  • The probability is

    • 1 if the variables already set satisfy the clause.

    • 1 βˆ’ (1/2)^k otherwise, where k is the number of literals in the clause whose variables are still unset.
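A minimal sketch of the full derandomisation, under the same signed-literal clause representation as before (an assumption for illustration); cond_expectation implements exactly the probability rule above:

    def cond_expectation(clauses, partial):
        # E[number of satisfied clauses | fixed variables], with unset
        # variables treated as independent fair coin flips.
        # `partial` maps a 1-based variable index to True/False.
        total = 0.0
        for clause in clauses:
            satisfied, unset = False, 0
            for lit in clause:
                v = abs(lit)
                if v in partial:
                    if (lit > 0) == partial[v]:
                        satisfied = True  # a fixed literal already satisfies the clause
                else:
                    unset += 1
            # probability 1 if already satisfied, 1 - (1/2)^k otherwise
            total += 1.0 if satisfied else 1.0 - 0.5 ** unset
        return total

    def derandomise(clauses, n):
        # Method of conditional expectations: fix x1, ..., xn one by one,
        # always keeping the branch with the larger conditional expectation.
        partial = {}
        for i in range(1, n + 1):
            partial[i] = True
            e_true = cond_expectation(clauses, partial)
            partial[i] = False
            e_false = cond_expectation(clauses, partial)
            partial[i] = e_true >= e_false
        return partial

    # Example: (x1 OR x2) AND (NOT x1 OR x2) AND (x1 OR NOT x2)
    print(derandomise([[1, 2], [-1, 2], [1, -2]], n=2))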

Method of conditional expectations

  • Derandomisation using conditional expectations.

  • Works for a wide variety of applications as long as

    • The variables are set independently.

    • The conditional expectations can be calculated in polynomial time.

Recall: Deterministic Rounding

  • For the case of vertex cover, we can solve the LP-relaxation in polynomial time to find an optimal solution.

  • The optimal solution is a β€œfractional” vertex cover, where variables can take values between 0 and 1.

  • We round the fractional solution to an integer solution.

    • We pick a variable xi and we set it to 1 or 0.

    • If we set everything to 0, it is not a vertex cover.

    • If we set everything to 1, we β€œpay” too much.

    • We set variable xi to 1 if its fractional value xi* β‰₯ 1/2 and to 0 otherwise (sketch below).
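A minimal sketch of this threshold rounding, assuming the fractional optimum x* is already available (from any LP solver, not shown); the names are illustrative:

    def round_vertex_cover(x_star, threshold=0.5):
        # Keep every vertex whose fractional value is at least 1/2.
        return {v for v, value in enumerate(x_star) if value >= threshold}

    def is_vertex_cover(edges, cover):
        # Every edge needs an endpoint in the cover; this always holds after
        # rounding, since x*_u + x*_v >= 1 forces max(x*_u, x*_v) >= 1/2.
        return all(u in cover or v in cover for (u, v) in edges)

    edges = [(0, 1), (1, 2), (2, 3)]
    x_star = [0.5, 0.5, 0.5, 0.5]  # a feasible fractional cover
    cover = round_vertex_cover(x_star)
    print(cover, is_vertex_cover(edges, cover))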

Randomised Rounding

  • We formulate the problem as an ILP.

  • We write the LP-relaxation.

  • We solve the LP-relaxation.

  • We round the variables with probabilities that can depend on their values.

  • Let (y*, z*) be an optimal solution to the LP-relaxation.

  • Rounding: Set xi to true independently with probability yi*.

    • e.g., if y* = (1/3, 1/4, 5/6, 1/2, …) we will set variables x1, x2, x3, x4, … to true with probabilities 1/3, 1/4, 5/6, 1/2, … respectively.
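A minimal sketch of this rounding step, assuming the LP optimum y* is given:

    import random

    def randomised_rounding(y_star):
        # Set x_i to true independently with probability y*_i.
        return [random.random() < y for y in y_star]

    y_star = [1/3, 1/4, 5/6, 1/2]
    print(randomised_rounding(y_star))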

MAX SAT as an ILP

Variables: yi = 1 if xi is true and 0 otherwise.

We denote clause Cj by ⋁i∈Pj π‘₯i ∨ ⋁i∈Nj ¬π‘₯i, where Pj and Nj are the index sets of the variables appearing positively and negated in Cj, respectively.

Variables: zj = 1 if clause Cj is satisfied and 0 otherwise.

We have the inequality βˆ‘i∈Pj 𝑦i + βˆ‘i∈Nj (1 βˆ’ 𝑦i) β‰₯ 𝑧j for every clause Cj, and the objective is to maximise βˆ‘j 𝑧j.
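Putting the pieces together, the full ILP in LaTeX notation (the LP-relaxation replaces the integrality constraints by 0 ≀ yi, zj ≀ 1):

    \begin{align*}
    \text{maximise}   \quad & \sum_{j=1}^{m} z_j \\
    \text{subject to} \quad & \sum_{i \in P_j} y_i + \sum_{i \in N_j} (1 - y_i) \ge z_j
                              && \text{for all clauses } C_j, \\
                            & y_i, z_j \in \{0, 1\}.
    \end{align*}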

Randomised Rounding for MAX-SAT

  • Our randomised rounding algorithm gives an approximation ratio of 1/(1 βˆ’ 1/e) β‰ˆ 1.58.

  • This is better than 2 (the ratio of the unbiased coin-flipping algorithm).

  • This is better than 1.618 (the ratio obtained with optimally biased coin flips).

  • Sidenote: 1.618 β‰ˆ Ο†, the golden ratio.

The better of the two

  • The β€œbetter of the two” algorithm runs both the unbiased coin-flipping algorithm and the randomised rounding algorithm, and outputs the better of the two assignments (see the argument below).

  • It has approximation ratio 4/3 β‰ˆ 1.33, better than either algorithm on its own.
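Why 4/3? A sketch of the standard per-clause averaging argument: a clause Cj with k literals is satisfied by the coin-flipping algorithm with probability 1 βˆ’ 2^(βˆ’k) and by randomised rounding with probability at least (1 βˆ’ (1 βˆ’ 1/k)^k) zj*, so the better of the two satisfies it with probability at least, in LaTeX notation:

    \begin{align*}
    \tfrac{1}{2}\Big[(1 - 2^{-k}) + \big(1 - (1 - \tfrac{1}{k})^{k}\big)\, z_j^*\Big]
    &\ge \tfrac{1}{2}\Big[(1 - 2^{-k}) + 1 - (1 - \tfrac{1}{k})^{k}\Big]\, z_j^*
    \ge \tfrac{3}{4}\, z_j^* .
    \end{align*}

The middle step uses zj* ≀ 1; summing over clauses gives expected value at least (3/4) βˆ‘j zj* β‰₯ (3/4) β‹… OPT, i.e., ratio 4/3 (the final bound holds with equality for k = 1, 2 and strictly for larger k).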

Back to Randomised Rounding

  • Is our RR algorithm the best possible?

  • How do we (attempt to) show that?

    • Integrality gap.

Integrality Gap of MAX-SAT

  • Consider the formula: (x1 ∨ x2) ∧ (¬x1 ∨ x2) ∧ (x1 ∨ ¬x2) ∧ (¬x1 ∨ ¬x2)

  • The optimal integral solution satisfies 3 clauses.

  • The optimal fractional solution sets

    y1 = y2 = 1/2 and zj = 1 for all j, and so β€œsatisfies” 4 clauses (each clause constraint holds, since its two literal values sum to 1 β‰₯ zj).

  • The integrality gap is at least 4/3.
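A quick brute-force check of the integral optimum, using the signed-literal representation from the earlier sketches:

    from itertools import product

    # (x1 OR x2) AND (NOT x1 OR x2) AND (x1 OR NOT x2) AND (NOT x1 OR NOT x2)
    clauses = [[1, 2], [-1, 2], [1, -2], [-1, -2]]

    best = max(
        sum(any((lit > 0) == a[abs(lit) - 1] for lit in c) for c in clauses)
        for a in product([False, True], repeat=2)
    )
    print(best)  # 3 -- no assignment satisfies all four clauses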

What does this mean?

  • We cannot hope to design an LP-relaxation and rounding-based algorithm (for this ILP formulation) that outperforms our β€œbetter of the two” algorithm.

  • Can we design one that matches the 4/3 approximation ratio?

  • Yes we can!

    • Instead of β€œSet xi to true independently with probability yi*”,

    • We use β€œSet xi to true independently with probability f(yi*)”, for some function f.

    • Which function f?

      • Any function f such that 1 βˆ’ 4^(βˆ’x) ≀ f(x) ≀ 4^(xβˆ’1).
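For instance, f(x) = 1 βˆ’ 4^(βˆ’x) satisfies both bounds (the lower one with equality); this choice of f is one valid example rather than the unique option. A small numeric check and the corresponding rounding step:

    import random

    f = lambda x: 1 - 4 ** (-x)  # one valid choice: meets the lower bound with equality

    # Check 1 - 4^(-x) <= f(x) <= 4^(x-1) on a grid over [0, 1].
    for k in range(101):
        x = k / 100
        assert 1 - 4 ** (-x) <= f(x) <= 4 ** (x - 1) + 1e-12

    def nonlinear_rounding(y_star):
        # Set x_i to true independently with probability f(y*_i).
        return [random.random() < f(y) for y in y_star]

    print(nonlinear_rounding([1/3, 1/4, 5/6, 1/2]))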

Algorithms for MAX-SAT

  • Our first RR algorithm gives an approximation ratio of 1/(1 βˆ’ 1/e) β‰ˆ 1.58.

  • This is better than 2 (a fair coin flip for each variable).

  • This is better than 1.618 (an optimally biased coin flip for each variable).

  • The β€œbetter of the two” algorithm has approximation ratio 4/3 β‰ˆ 1.33.

  • The more sophisticated RR algorithm has an approximation ratio of 4/3 β‰ˆ 1.33.