# Branch-and-fix approach

In 2000, Feinberg suggested an embedding of the Hamiltonian cycle problem (HCP) in a discounted Markov decision process (MDP). Feinberg introduced a polytope Xβ, defined for any given graph, such that every Hamiltonian cycle in the graph corresponds to an extreme point of Xβ. Nguyen characterised all extreme points of Xβ and showed that each corresponds either to a Hamiltonian cycle or to a 1-randomised policy: a policy whose decision is deterministic at every vertex except one, where two choices receive nonzero probability. It is easy to see that a 1-randomised policy is simply a convex combination of two deterministic policies. Since it is only the 1-randomised policies that prevent standard implementations of the simplex method from finding a Hamiltonian cycle, it has long been recognised that branch-and-bound-type methods can be used to eliminate the possibility of arriving at these undesirable extreme points (see, for instance, Filar and Lasserre). However, the method reported by Filar and Lasserre uses an embedding in a long-run average MDP with a perturbation of transition probabilities; the perturbation introduces a small parameter into most of the coefficients of the variables, thereby destroying sparsity. Furthermore, that method was never fully implemented, or tested beyond a few simple examples.

The characterisation given by Nguyen indicates that the 1-randomised policies induced by extreme points of Xβ are less prevalent than might have been conjectured, since they cannot be constructed from convex combinations of just any two deterministic policies. This motivates algorithmic approaches based on successive elimination of the arcs that could be used to construct such convex combinations. Since our goal is to find an extreme point xe ∈ Xβ such that xe corresponds to a deterministic policy, we have a number of degrees of freedom in designing an algorithm. In particular, different linear objective functions can be chosen at each stage of the algorithm, the discount parameter β ∈ (0, 1) can be adjusted, and μ ∈ (0, 1/N) can be chosen small, but not so small as to cause numerical difficulties. The latter parameter must be positive to ensure that the inverse map M⁻¹, which converts extreme points into policies, is well-defined.

The branch-and-fix method is as follows. We solve a sequence of linear programs – two at each branching point of the logical B&F tree – with the generic structure

min L(x)

s.t.

x ∈ Xβ, and additional constraints, if any, on arcs fixed earlier.
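To make the generic LP concrete, the sketch below sets up a schematic version of the constraints defining Xβ for a small digraph and solves it with an off-the-shelf solver. The discounted flow balances and the occupancy lower bounds are an illustrative simplification of Feinberg's exact constraint set (the μ bounds are omitted), and the graph, home vertex, and objective are arbitrary choices made for the example.

```python
# Schematic sketch of the generic LP "min L(x) s.t. x in X_beta".
# The equalities are discounted flow balances; the inequalities are
# per-vertex occupancy lower bounds that a Hamiltonian-cycle occupancy
# measure satisfies.  This is an illustrative simplification of X_beta,
# not Feinberg's exact constraint set.
from scipy.optimize import linprog

def solve_schematic_lp(arcs, n, beta=0.9, home=0, costs=None):
    idx = {arc: k for k, arc in enumerate(arcs)}      # column of each arc variable
    A_eq = [[0.0] * len(arcs) for _ in range(n)]
    b_eq = [1.0 if j == home else 0.0 for j in range(n)]
    A_ub = [[0.0] * len(arcs) for _ in range(n)]
    lb = beta ** (n - 1) / (1.0 - beta ** n)          # smallest occupancy on a Hamiltonian cycle
    for (i, j), k in idx.items():
        A_eq[i][k] += 1.0      # outflow at tail vertex i
        A_eq[j][k] -= beta     # discounted inflow at head vertex j
        A_ub[i][k] = -1.0      # -(occupancy of i) <= -lb forces every vertex to be visited
    c = costs if costs is not None else [1.0] * len(arcs)   # an arbitrary linear objective L(x)
    res = linprog(c, A_ub=A_ub, b_ub=[-lb] * n, A_eq=A_eq,
                  b_eq=b_eq, bounds=[(0, None)] * len(arcs))
    return res, idx

# A 4-vertex digraph: the cycle 0-1-2-3-0 plus a chord (0, 2).
arcs = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
res, idx = solve_schematic_lp(arcs, n=4)
print(res.status, round(res.x[idx[(1, 2)]], 3))
```

The occupancy lower bound at vertex 1 forces flow onto arc (1, 2), so any feasible solution here must pass through every vertex, which is the role the extra constraints of Xβ play in the actual method.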

Step 1 – Initiation. We solve the original LP without any additional constraints and with some choice of objective function L(x), obtaining an optimal basic feasible solution x0. We then find f0 = M⁻¹(x0). If f0 corresponds to a deterministic policy, we stop: the policy f0 identifies a Hamiltonian cycle. Otherwise, f0 is a 1-randomised policy.
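The map M⁻¹ simply normalises the arc variables at each vertex. The following sketch, with hypothetical helper names, recovers a policy from an extreme point and classifies it as deterministic or 1-randomised, returning the splitting vertex in the latter case.

```python
# Sketch of recovering a policy f = M^{-1}(x) from an extreme point x,
# and classifying it as deterministic or 1-randomised.
# `x` maps arcs (i, j) to variable values; the helper names are hypothetical.

def policy_from_point(x, tol=1e-9):
    """Normalise arc weights at each vertex: f(i, j) = x(i, j) / sum_a x(i, a)."""
    f = {}
    for (i, j), v in x.items():
        f.setdefault(i, {})[j] = v
    for i in f:
        total = sum(f[i].values())
        f[i] = {j: v / total for j, v in f[i].items() if v / total > tol}
    return f

def splitting_vertex(f):
    """Return None if f is deterministic, else the single vertex with two choices."""
    randomised = [i for i, choices in f.items() if len(choices) > 1]
    return randomised[0] if randomised else None

# A 1-randomised point on 4 vertices: vertex 0 splits its mass over arcs (0,1) and (0,2).
x = {(0, 1): 1.5, (0, 2): 1.5, (1, 2): 1.0, (2, 3): 2.0, (3, 0): 1.8}
f = policy_from_point(x)
print(splitting_vertex(f))
```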

Step 2 – Branching. We use the 1-randomised policy f0 to identify the splitting vertex i and the two arcs (i, j1) and (i, j2) that carry the single randomisation in f0. If there are d arcs {(i, a1), … , (i, ad)} emanating from vertex i, we construct d subdigraphs G1, G2, … , Gd, where in Gk the arc (i, ak) is the only arc emanating from vertex i. These subdigraphs are identical to the original graph G at all other vertices. In this process we, by default, fix an arc in each Gk.
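A minimal sketch of the branching step, assuming the graph is stored as an adjacency dictionary: for each arc (i, ak) out of the splitting vertex, build the subdigraph Gk in which that arc is the only one leaving i.

```python
import copy

def branch_at_vertex(adj, i):
    """Given adjacency dict adj and splitting vertex i, return subdigraphs
    G1..Gd, one per arc (i, ak), in which (i, ak) is the only arc leaving i."""
    subdigraphs = []
    for a in adj[i]:
        g = copy.deepcopy(adj)
        g[i] = [a]          # fix arc (i, a): all other arcs out of i are removed
        subdigraphs.append(g)
    return subdigraphs

# Splitting vertex 0 with out-arcs to 1, 2 and 3 yields three subdigraphs.
adj = {0: [1, 2, 3], 1: [2], 2: [3], 3: [0]}
for g in branch_at_vertex(adj, 0):
    print(g[0])
```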

Step 3 – Fixing. In many subdigraphs, the fixing of one arc implies that other arcs can also be fixed, without risk of unintentionally eliminating any Hamiltonian cycle of the current subdigraph that contains the already-fixed arcs. Four checks for determining additional arcs that can be fixed are described later in this section. Once we identify such arcs, we also fix them at this step.
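The four checks themselves are described later; as an illustration of the flavour of this step, two elementary degree-based deductions already propagate fixings: once an arc (i, j) is fixed, no other arc may leave i or enter j (every vertex of a Hamiltonian cycle has in- and out-degree 1), and any vertex left with a single outgoing arc has that arc fixed in turn. The sketch below implements only these illustrative deductions, not the paper's checks.

```python
def propagate_fixing(adj, fixed):
    """Illustrative arc-fixing propagation (not the four checks of the paper):
    repeatedly (a) delete arcs competing with a fixed arc at its tail or head,
    and (b) fix the sole remaining out-arc of any vertex."""
    fixed = set(fixed)
    changed = True
    while changed:
        changed = False
        for (i, j) in list(fixed):
            for u in adj:
                if u == i and adj[u] != [j]:
                    adj[u] = [j]                       # (a) only (i, j) may leave i
                    changed = True
                elif u != i and j in adj[u] and (u, j) not in fixed:
                    adj[u] = [v for v in adj[u] if v != j]   # (a) only (i, j) may enter j
                    changed = True
        for u in adj:
            if len(adj[u]) == 1 and (u, adj[u][0]) not in fixed:
                fixed.add((u, adj[u][0]))              # (b) a sole out-arc is forced
                changed = True
    return adj, fixed

# Fixing (0, 1) in this digraph forces the entire cycle 0-1-2-3-0.
adj = {0: [1, 2], 1: [2, 3], 2: [3], 3: [0]}
adj, fixed = propagate_fixing(adj, {(0, 1)})
print(sorted(fixed))
```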

Step 4 – Iteration. We solve a second LP that checks whether certain bounds can still be satisfied under the current fixing of arcs. If so, we repeat Step 1 with the LP constructed for the graph at the current branching point of the logical B&F tree, with additional constraints detailed in Ejov et al. This branching point may correspond to G1, G2, … , Gd, or to a subgraph constructed from one of these with the help of additional arc fixing.

As is typical with branching methods, decisions guiding which branch to select first are important and open to alternative heuristics. Ejov et al. investigate five possible branching methods.

The branch-and-fix method is investigated in detail by Haythorpe, where experimental results are given for many instances. In addition, new constraints, called Wedge constraints, are added to the bound-fathoming checks, resulting in significant improvements to the efficiency of the heuristic. These improvements ultimately inspired the Wedged-MIP Heuristic.

### PUBLICATIONS

Ejov, V., Filar, J.A., Haythorpe, M. and Nguyen, G.T. "Refined MDP-based branch-and-fix algorithm for the Hamiltonian Cycle Problem." Mathematics of Operations Research, 34(3):758-768, 2009.

Feinberg, E. "Constrained discounted Markov decision processes and Hamiltonian cycles." Mathematics of Operations Research, 25(1):130-140, 2000.

Filar, J.A. and Lasserre, J.B. "A non-standard branch and bound method for the Hamiltonian cycle problem." Australian and New Zealand Industrial and Applied Mathematics Journal, 42:586-607, 2000.

Haythorpe, M. "Markov Chain based algorithms for the Hamiltonian cycle problem." University of South Australia, 2010.

Nguyen, G.T. "Hamiltonian cycle problem, Markov decision processes and graph spectra." University of South Australia, 2009.