Variance of first hitting times

The inherent difficulty of many problems in discrete mathematics and combinatorial optimization stems precisely from the discrete nature of the domains in which these problems are posed. This line of research maps such problems into convex domains where the tools of continuum analysis can be brought to bear. The convexification of domains is achieved by assigning a probabilistic interpretation to the key elements of the original problems, even though those problems are deterministic.

While there are probably other instances of similar ideas being exploited elsewhere, our approach builds on the innovation introduced in Filar and Krass [3], where the Hamiltonian Cycle and Traveling Salesman Problems were embedded in a structured, singularly perturbed Markov Decision Process (MDP, for short). The unifying idea of [3] was to interpret the subgraphs traced out by deterministic policies (including Hamiltonian Cycles, if any) as extreme points of a convex polyhedron in a space of randomized policies.

To indicate the flavour of the results obtained via this approach, consider a key observation that led to the results presented in Borkar et al. [1], [2]: namely, that the “correct” convex domain in which Hamiltonian Cycles should be sought is the set DS of doubly stochastic matrices induced by a given graph.

The above observation is nearly obvious once we recall the famous (and non-obvious) Birkhoff-von Neumann Theorem, which states that the set of all N × N doubly stochastic matrices is the convex hull of the N × N permutation matrices.
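To make this concrete, here is a minimal NumPy sketch (not taken from [1]-[3]; the 4-node example and the weights are hypothetical) illustrating one direction of the theorem: any convex combination of permutation matrices is doubly stochastic, i.e. all of its row and column sums equal 1.

    import numpy as np

    N = 4
    # Permutation matrices viewed as deterministic transition matrices:
    # row i has a single 1 in the column of the successor of node i.
    P_cycle = np.eye(N)[[1, 2, 3, 0]]   # the Hamiltonian cycle 0 -> 1 -> 2 -> 3 -> 0
    P_pairs = np.eye(N)[[1, 0, 3, 2]]   # two disjoint 2-cycles: (0 1) and (2 3)

    # A convex combination of permutation matrices lies in DS.
    D = 0.7 * P_cycle + 0.3 * P_pairs

    assert np.allclose(D.sum(axis=0), 1.0)   # every column sums to 1
    assert np.allclose(D.sum(axis=1), 1.0)   # every row sums to 1
    print(D)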

Of course, in searching for a Hamiltonian Cycle of a given graph, we must restrict ourselves to the convex hull of only those permutation matrices that correspond to subgraphs of that graph. Results in [1], [2] imply that, after a suitable perturbation, and with the random variable

τ1 := the first hitting time of the home node 1 (after time 0);

the Hamiltonian Cycle Problem (HCP) essentially reduces to “merely” minimizing the variance-like functional

E[(τ1 − N)²]

over the space DS. This probabilistic, almost statistical, interpretation should permit us to bring a wide range of analytical and algorithmic tools to bear on the HCP.
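As a rough illustration of the functional (a sketch only, which ignores the singular perturbation constructed in [1], [2] and uses a hypothetical 4-node example with the home node placed at index 0), one can estimate E[(τ1 − N)²] by simulating the Markov chain induced by a doubly stochastic matrix D: for the permutation matrix of a Hamiltonian cycle the return time to the home node is exactly N, so the functional vanishes, while a “mixed” element of DS yields a strictly positive value.

    import numpy as np

    def hitting_time_functional(D, num_samples=100_000, seed=0, max_steps=10_000):
        """Monte Carlo estimate of E[(tau_1 - N)^2], where tau_1 is the first
        return time to the home node (index 0) of the chain with transition
        matrix D, started at the home node."""
        rng = np.random.default_rng(seed)
        N = D.shape[0]
        total = 0.0
        for _ in range(num_samples):
            state, tau = 0, 0
            while True:
                state = rng.choice(N, p=D[state])
                tau += 1
                if state == 0 or tau >= max_steps:
                    break
            total += (tau - N) ** 2
        return total / num_samples

    P_cycle = np.eye(4)[[1, 2, 3, 0]]                        # Hamiltonian cycle: tau_1 = 4 always
    D_mixed = 0.7 * P_cycle + 0.3 * np.eye(4)[[1, 0, 3, 2]]  # interior point of DS

    print(hitting_time_functional(P_cycle, num_samples=100))  # 0.0
    print(hitting_time_functional(D_mixed))                   # strictly positive

In [1], [2] the analysis is carried out for a perturbed version of the chain and the minimization ranges over all of DS; the sketch above is only meant to convey why the functional singles out Hamiltonian Cycles.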

Thus the theoretical aim of this line of research is to explain the essential difficulty of the Hamiltonian Cycle Problem, namely its NP-completeness, in analytic terms such as a measure of variability, the size of a gap between certain optimization problems, or the nature of certain singularities. The algorithmic aim is to construct a general-purpose heuristic algorithm for the HCP; it rests on the belief that some classical “static” optimization problems can be analyzed by embedding them in suitably constructed Markov Decision Processes. In our setting, the theoretical and algorithmic aims are not separate: results on one seem to influence progress on the other.

PUBLICATIONS

[1] Borkar, V.S., Ejov, V. and Filar, J.A. “Directed graphs, Hamiltonicity and doubly stochastic matrices.” Random Structures and Algorithms, 25(4):376-395, 2004.

[2] Borkar, V.S., Ejov, V. and Filar, J.A. “On the Hamiltonicity gap and doubly stochastic matrices.” Random Structures and Algorithms, 34(4):502-519, 2009.

[3] Filar, J.A. and Krass, D. “Hamiltonian Cycles and Markov Chains.” Mathematics of Operations Research, 19(1):223-237, 1994.