Theory of Computing Report

ECCC Papers: TR22-136 | Rounds vs Communication Tradeoffs for Maximal Independent Sets |
Sepehr Assadi, Gillat Kol, Zhijun Zhang
https://eccc.weizmann.ac.il/report/2022/136
2022-09-25T06:24:48+00:00
We consider the problem of finding a maximal independent set (MIS) in the shared blackboard communication model with vertex-partitioned inputs. There are $n$ players corresponding to vertices of an undirected graph, and each player sees the edges incident on its vertex -- this way, each edge is known by both its endpoints and is thus shared by two players. The players communicate in simultaneous rounds by posting their messages on a shared blackboard visible to all players, with the goal of computing an MIS of the graph. While the MIS problem is well studied in other distributed models, and while shared blackboard is, perhaps, the simplest broadcast model, lower bounds for our problem were only known against one-round protocols.
We present a lower bound on the round-communication tradeoff for computing an MIS in this model. Specifically, we show that when $r$ rounds of interaction are allowed, at least one player needs to communicate $\Omega(n^{1/20^{r+1}})$ bits. In particular, with logarithmic bandwidth, finding an MIS requires $\Omega(\log\log{n})$ rounds. This lower bound can be compared with the algorithm of Ghaffari, Gouleakis, Konrad, Mitrović, and Rubinfeld [PODC 2018] that solves MIS in $O(\log\log{n})$ rounds but with a logarithmic bandwidth for an average player. Additionally, our lower bound further extends to the closely related problem of maximal bipartite matching.
The presence of edge-sharing gives the algorithms in our model a surprising power and numerous algorithmic results exploiting this power are known. For a similar reason, proving lower bounds in this model is much more challenging, as this sharing in the players' inputs prohibits the use of standard number-in-hand communication complexity arguments. Thus, to prove our results, we devise a new round elimination framework, which we call partial-input embedding, that may also be useful in future work for proving round-sensitive lower bounds in the presence of shared inputs.
Finally, we discuss several implications of our results to multi-round (adaptive) distributed sketching algorithms, broadcast congested clique, and to the welfare maximization problem in two-sided matching markets.
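The object being computed here is simple to define sequentially; the paper's difficulty lies entirely in the communication model. As a point of reference, a minimal centralized greedy sketch (not the distributed protocol studied above):

```python
def greedy_mis(n, edges):
    """Greedy maximal independent set on vertices 0..n-1: scan vertices in
    order and keep a vertex iff none of its neighbours was already kept."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    mis = set()
    for v in range(n):
        if not (adj[v] & mis):  # no kept neighbour: v joins the MIS
            mis.add(v)
    return mis

# Path 0-1-2-3: vertices 0 and 2 form a maximal independent set
print(greedy_mis(4, [(0, 1), (1, 2), (2, 3)]))  # {0, 2}
```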
ECCC Papers https://eccc.weizmann.ac.il/

ECCC Papers: TR22-135 | Decision Tree Complexity versus Block Sensitivity and Degree |
Swagato Sanyal, Supartha Poddar, Rahul Chugh
https://eccc.weizmann.ac.il/report/2022/135
2022-09-25T06:22:55+00:00
Relations between the decision tree complexity and various other complexity measures of Boolean functions are a thriving topic of research in computational complexity. While decision tree complexity has long been known to be polynomially related to many other measures, the optimal exponents of many of these relations are not known. It is known that decision tree complexity is bounded above by the cube of block sensitivity, and by the cube of polynomial degree. However, the widest separation between decision tree complexity and each of block sensitivity and degree that is witnessed by known Boolean functions is quadratic.
Proving quadratic relations between these measures would resolve several open questions in decision tree complexity. For example, it would give a tight relation between decision tree complexity and the square of randomized decision tree complexity, and a tight relation between zero-error randomized decision tree complexity and the square of fractional block sensitivity, resolving an open question raised by Aaronson. In this work, we investigate the tightness of the existing cubic upper bounds.
We improve the cubic upper bounds for many interesting classes of Boolean functions. We show that for graph properties and for functions with a constant number of alternations, both of the cubic upper bounds can be improved to quadratic. We define a class of Boolean functions, which we call the zebra functions, that comprises Boolean functions where each monotone path from $0^n$ to $1^n$ has an equal number of alternations. This class contains the symmetric and monotone functions as its subclasses. We show that for any zebra function, decision tree complexity is at most the square of block sensitivity, and certificate complexity is at most the square of degree.
Finally, we show using a lifting theorem of communication complexity by G{\"{o}}{\"{o}}s, Pitassi and Watson that the task of proving an improved upper bound on the decision tree complexity for all functions is in a sense equivalent to the potentially easier task of proving a similar upper bound on communication complexity for each bi-partition of the input variables, for all functions. In particular, this implies that to bound the decision tree complexity it suffices to bound smaller measures like parity decision tree complexity, subcube decision tree complexity and decision tree rank, that are defined in terms of models that can be efficiently simulated by communication protocols.
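The measures compared above can be evaluated by brute force on tiny functions. A sketch, purely for illustration, that computes block sensitivity (the maximum, over inputs $x$, of the number of pairwise disjoint blocks whose flipping changes $f(x)$):

```python
from itertools import chain, combinations, product

def flip(x, block):
    """Flip the bits of x indexed by block."""
    return tuple(1 - b if i in block else b for i, b in enumerate(x))

def block_sensitivity_at(f, x):
    """Max number of pairwise disjoint blocks B with f(x^B) != f(x)."""
    n = len(x)
    blocks = chain.from_iterable(combinations(range(n), r) for r in range(1, n + 1))
    sensitive = [set(b) for b in blocks if f(flip(x, b)) != f(x)]
    best = 0
    def pack(count, used, start):
        nonlocal best
        best = max(best, count)
        for i in range(start, len(sensitive)):
            if not (sensitive[i] & used):
                pack(count + 1, used | sensitive[i], i + 1)
    pack(0, set(), 0)
    return best

def block_sensitivity(f, n):
    return max(block_sensitivity_at(f, x) for x in product((0, 1), repeat=n))

# OR on 3 bits: at 000 each single bit is a sensitive block, so bs = n = 3
print(block_sensitivity(lambda x: int(any(x)), 3))  # 3
```

This is doubly exponential in $n$ and only meant to make the definition concrete.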
ECCC Papers: TR22-134 | Some Games on Turing Machines and Power from Random Strings |
Alexey Milovanov, Greg McLellan
https://eccc.weizmann.ac.il/report/2022/134
2022-09-25T06:21:39+00:00
Denote by $R$ the set of strings with high Kolmogorov complexity. In [E. Allender, H. Buhrman, M. Kouck\'y, D. van Melkebeek, and D. Ronneburger. Power from random strings. \emph{SIAM Journal on Computing}, 35:1467--1493, 2006] the idea of using $R$ as an oracle for resource-bounded computation models was presented. This idea was later developed in several other papers.
We prove new lower bounds for $Q^R_{tt}$ and $Q^R_{sa}$:
- Oblivious-NP is subset of $Q^R_{tt}$;
- Oblivious-MA is subset of $Q^R_{sa}$.
Here $Q$ means quasi-polynomial time; ``sa'' means sub-adaptive reduction, a new type of reduction that we introduce. This type of reduction is no weaker than truth-table reduction and no stronger than Turing reduction.
Also we prove upper bounds for $BPP^R_{tt}$ and $P^R_{sa}$, following [E. Allender, L. Friedman, and W. Gasarch. Limits on the computational power of random strings]:
- $P^R_{sa}$ is a subset of $EXP$;
- $BPP^R_{tt}$ is a subset of $AEXP(poly)$.
Here AEXP(poly) is the class of languages decidable in exponential time by an alternating Turing machine that switches from an existential to a universal state or vice versa at most polynomial times.
Finally, we analyze some games that originate in [E. Allender, L. Friedman, and W. Gasarch. Limits on the computational power of random strings]. We prove completeness of these games. These results show that methods of this kind cannot prove better upper bounds for $P^R$, $NP^R$ and $P^R_{tt}$ than those already known.
CCI: jobs: Open-Rank Professor of Computer Science at Pomona College (apply by October 15, 2022)
http://cstheory-jobs.org/2022/09/23/open-rank-professor-of-computer-science-at-pomona-college-apply-by-october-15-2022/
2022-09-23T17:09:49+00:00
<p>Pomona College seeks applications for two Open-Rank (assistant, associate, or full) Professor of Computer Science positions, to begin on July 1, 2023. All subfields of computer science will be considered. Candidates should have a broad background in computer science, be excellent teachers, have an active research program, and be excited about directing undergraduate research.</p>
<p>Website: <a href="https://academicjobsonline.org/ajo/jobs/22190">https://academicjobsonline.org/ajo/jobs/22190</a><br />
Email: cssearch@pomona.edu</p>
<p class="authors">By shacharlovett</p>
CCI: jobs https://cstheory-jobs.org

arXiv: Computational Complexity: Solving the General Case of Rank-3 Maker-Breaker Games in Polynomial Time
http://arxiv.org/abs/2209.11202
2022-09-23T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Bahack_L/0/1/0/all/0/1">Lear Bahack</a></p><p>A rank-3 Maker-Breaker game is played on a hypergraph in which all hyperedges
are sets of at most 3 vertices. The two players of the game, called Maker and
Breaker, move alternately. On his turn, Maker chooses a vertex to be withdrawn
from all hyperedges, while Breaker on her turn chooses a vertex and deletes all
the hyperedges containing that vertex. Maker wins when, by the end of his turn,
some hyperedge is completely covered, i.e., the last remaining vertex of that
hyperedge has been withdrawn. Breaker wins when, by the end of her turn, all
hyperedges have been deleted.
</p>
<p>Solving a Maker-Breaker game is the computational problem of choosing an
optimal move, or equivalently, deciding which player has a winning strategy in
a configuration. The complexity of solving two degenerate cases of rank-3 games
has been proven before to be polynomial. In this paper, we show that the
general case of rank-3 Maker-Breaker games is also solvable in polynomial time.
</p>
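The rules above translate directly into an exponential-time minimax solver, useful for checking tiny instances by hand (the paper's contribution is a polynomial-time algorithm, which this brute-force sketch is not):

```python
from functools import lru_cache

def maker_wins(hyperedges):
    """Exhaustive minimax solver for tiny Maker-Breaker games."""
    start = frozenset(frozenset(e) for e in hyperedges)

    @lru_cache(maxsize=None)
    def solve(edges, maker_to_move):
        if not edges:
            return False                  # every hyperedge deleted: Breaker wins
        vertices = frozenset().union(*edges)
        if maker_to_move:
            # Maker withdraws v from every hyperedge; an emptied hyperedge wins.
            for v in vertices:
                nxt = frozenset(e - {v} for e in edges)
                if frozenset() in nxt or solve(nxt, False):
                    return True
            return False
        # Breaker deletes every hyperedge containing v.
        for v in vertices:
            nxt = frozenset(e for e in edges if v not in e)
            if not solve(nxt, True):
                return False
        return True

    return frozenset() in start or solve(start, True)

print(maker_wins([{1, 2}]))                  # False: Breaker grabs the other vertex
print(maker_wins([{1, 2}, {1, 3}, {2, 3}]))  # True
```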
arXiv: Computational Complexity https://arxiv.org/list/cs.CC/recent

arXiv: Computational Complexity: Hyperstable Sets with Voting and Algorithmic Hardness Applications
http://arxiv.org/abs/2209.11216
2022-09-23T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/math/1/au:+Heilman_S/0/1/0/all/0/1">Steven Heilman</a></p><p>The noise stability of a Euclidean set $A$ with correlation $\rho$ is the
probability that $(X,Y)\in A\times A$, where $X,Y$ are standard Gaussian random
vectors with correlation $\rho\in(0,1)$. It is well-known that a Euclidean set
of fixed Gaussian volume that maximizes noise stability must be a half space.
</p>
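<p>The half-space fact can be checked numerically in one dimension, where Sheppard's formula gives the noise stability of the half-line $\{x \le 0\}$ in closed form. A Monte Carlo sketch (the seed and trial count are arbitrary choices):</p>

```python
import math
import random

def noise_stability_halfspace(rho, trials=200_000, seed=0):
    """Estimate Pr[X <= 0 and Y <= 0] for standard Gaussians X, Y with
    correlation rho, sampled as Y = rho*X + sqrt(1 - rho^2)*Z."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = rng.gauss(0, 1)
        y = rho * x + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        if x <= 0 and y <= 0:
            hits += 1
    return hits / trials

rho = 0.5
est = noise_stability_halfspace(rho)
exact = 0.25 + math.asin(rho) / (2 * math.pi)  # Sheppard's formula; 1/3 at rho = 1/2
print(round(est, 3), round(exact, 3))
```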
<p>For a partition of Euclidean space into $m>2$ parts each of Gaussian measure
$1/m$, it is still unknown what sets maximize the sum of their noise
stabilities. In this work, we classify partitions maximizing noise stability
that are also critical points for the derivative of noise stability with
respect to $\rho$. We call a partition satisfying these conditions hyperstable.
Under the assumption that a maximizing partition is hyperstable, we prove:
</p>
<p>* a (conditional) version of the Plurality is Stablest Conjecture for $3$ or
$4$ candidates.
</p>
<p>* a (conditional) sharp Unique Games Hardness result for MAX-m-CUT for $m=3$
or $4$
</p>
<p>* a (conditional) version of the Propeller Conjecture of Khot and Naor for
$4$ sets.
</p>
<p>We also show that a symmetric set that is hyperstable must be star-shaped.
</p>
<p>For partitions of Euclidean space into $m>2$ parts of fixed (but perhaps
unequal) Gaussian measure, the hyperstable property can only be satisfied when
all of the parts have Gaussian measure $1/m$. So, as our main contribution, we
have identified a possible strategy for proving the full Plurality is Stablest
Conjecture and the full sharp hardness for MAX-m-CUT: to prove both statements,
it suffices to show that sets maximizing noise stability are hyperstable. This
last point is crucial since any proof of the Plurality is Stablest Conjecture
must use a property that is special to partitions of sets into equal measures,
since the conjecture is false in the unequal measure case.
</p>
arXiv: Computational Geometry: Output Mode Switching for Parallel Five-bar Manipulators Using a Graph-based Path Planner
http://arxiv.org/abs/2209.10743
2022-09-23T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Edwards_P/0/1/0/all/0/1">Parker B. Edwards</a>, <a href="http://arxiv.org/find/cs/1/au:+Baskar_A/0/1/0/all/0/1">Aravind Baskar</a>, <a href="http://arxiv.org/find/cs/1/au:+Hills_C/0/1/0/all/0/1">Caroline Hills</a>, <a href="http://arxiv.org/find/cs/1/au:+Plecnik_M/0/1/0/all/0/1">Mark Plecnik</a>, <a href="http://arxiv.org/find/cs/1/au:+Hauenstein_J/0/1/0/all/0/1">Jonathan D. Hauenstein</a></p><p>The configuration manifolds of parallel manipulators exhibit more
nonlinearity than serial manipulators. Qualitatively, they can be seen to
possess extra folds. By projecting such manifolds onto spaces of engineering
relevance, such as an output workspace or an input actuator space, these folds
cast edges that exhibit nonsmooth behavior. For example, inside the global
workspace bounds of a five-bar linkage appear several local workspace bounds
that only constrain certain output modes of the mechanism. The presence of such
boundaries, which manifest in both input and output projections, serves as a
source of confusion when these projections are studied exclusively instead of
the configuration manifold itself. In particular, the design of nonsymmetric
parallel manipulators has been confounded by the presence of exotic projections
in their input and output spaces. In this paper, we represent the configuration
space with a radius graph, then weight each edge by solving an optimization
problem using homotopy continuation to quantify transmission quality. We then
employ a graph path planner to approximate geodesics between configuration
points that avoid regions of low transmission quality. Our methodology
automatically generates paths capable of transitioning between non-neighboring
output modes, a motion which involves osculating multiple workspace boundaries
(local, global, or both). We apply our technique to two nonsymmetric five-bar
examples that demonstrate how transmission properties and other characteristics
of the workspace can be selected by switching output modes.
</p>
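<p>The paper's planner rests on homotopy continuation and transmission-quality weights; stripped of those specifics, its graph-search core is a weighted shortest-path query. A generic Dijkstra sketch over a hypothetical weighted configuration graph:</p>

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a weighted graph {node: [(neighbour, weight), ...]}.
    Higher weight models worse transmission quality, so cheap paths
    steer around low-quality regions."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

# Toy graph: the direct edge A-D is "low quality" (weight 10),
# so the planner detours through B and C.
g = {"A": [("B", 1), ("D", 10)], "B": [("C", 1)], "C": [("D", 1)], "D": []}
print(shortest_path(g, "A", "D"))  # (['A', 'B', 'C', 'D'], 3.0)
```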
arXiv: Computational Geometry https://arxiv.org/list/cs.CG/recent

arXiv: Computational Geometry: Maths, Computation and Flamenco: overview and challenges
http://arxiv.org/abs/2209.10970
2022-09-23T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Diaz_Banez_J/0/1/0/all/0/1">José-Miguel Díaz-Báñez</a>, <a href="http://arxiv.org/find/cs/1/au:+Kroher_N/0/1/0/all/0/1">Nadine Kroher</a></p><p>Flamenco is a rich performance-oriented art music genre from Southern Spain
which attracts a growing community of aficionados around the globe. Due to its
improvisational and expressive nature, its unique musical characteristics, and
the fact that the genre is largely undocumented, flamenco poses a number of
interesting mathematical and computational challenges. Most existing approaches
in Musical Information Retrieval (MIR) were developed in the context of popular
or classical music and often do not generalize well to non-Western music
traditions, in particular when the underlying music theoretical assumptions do
not hold for these genres. Over the recent decade, a number of computational
problems related to the automatic analysis of flamenco music have been defined
and several methods addressing a variety of musical aspects have been proposed.
This paper provides an overview of the challenges which arise in the context of
computational analysis of flamenco music and outlines existing approaches.
</p>
arXiv: Data Structures and Algorithms: Uniform Reliability for Unbounded Homomorphism-Closed Graph Queries
http://arxiv.org/abs/2209.11177
2022-09-23T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Amarilli_A/0/1/0/all/0/1">Antoine Amarilli</a></p><p>We study the uniform query reliability problem, which asks, for a fixed
Boolean query Q, given an instance I, how many subinstances of I satisfy Q.
Equivalently, this is a restricted case of Boolean query evaluation on
tuple-independent probabilistic databases where all facts must have probability
1/2. We focus on graph signatures, and on queries closed under homomorphisms.
We show that for any such query that is unbounded, i.e., not equivalent to a
union of conjunctive queries, the uniform reliability problem is #P-hard. This
recaptures the hardness, e.g., of s-t connectedness, which counts how many
subgraphs of an input graph have a path between a source and a sink.
</p>
<p>This new hardness result on uniform reliability strengthens our earlier
hardness result on probabilistic query evaluation for unbounded
homomorphism-closed queries (ICDT'20). Indeed, our earlier proof crucially used
facts with probability 1, so it did not apply to the unweighted case. The new
proof presented in this paper avoids this; it uses our recent hardness result
on uniform reliability for non-hierarchical conjunctive queries without
self-joins (ICDT'21), along with new techniques.
</p>
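<p>The quantity being counted is easy to state as brute force, which is exponential and thus consistent with the #P-hardness shown here. A sketch counting subinstances that keep $s$ and $t$ connected:</p>

```python
from itertools import combinations

def st_connected(n, edges, s, t):
    """Is there an s-t path in the subgraph given by this edge set?"""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return t in seen

def uniform_reliability(n, edges, s, t):
    """Count the edge subsets whose subgraph connects s to t."""
    return sum(
        st_connected(n, sub, s, t)
        for r in range(len(edges) + 1)
        for sub in combinations(edges, r)
    )

# Triangle on {0,1,2}: 5 of the 8 edge subsets connect vertex 0 to vertex 1
print(uniform_reliability(3, [(0, 1), (1, 2), (0, 2)], 0, 1))  # 5
```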
arXiv: Data Structures and Algorithms https://arxiv.org/list/cs.DS/recent

arXiv: Data Structures and Algorithms: Efficiently Reconfiguring a Connected Swarm of Labeled Robots
http://arxiv.org/abs/2209.11028
2022-09-23T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Fekete_S/0/1/0/all/0/1">Sándor P. Fekete</a>, <a href="http://arxiv.org/find/cs/1/au:+Kramer_P/0/1/0/all/0/1">Peter Kramer</a>, <a href="http://arxiv.org/find/cs/1/au:+Rieck_C/0/1/0/all/0/1">Christian Rieck</a>, <a href="http://arxiv.org/find/cs/1/au:+Scheffer_C/0/1/0/all/0/1">Christian Scheffer</a>, <a href="http://arxiv.org/find/cs/1/au:+Schmidt_A/0/1/0/all/0/1">Arne Schmidt</a></p><p>When considering motion planning for a swarm of $n$ labeled robots, we need
to rearrange a given start configuration into a desired target configuration
via a sequence of parallel, continuous, collision-free robot motions. The
objective is to reach the new configuration in a minimum amount of time; an
important constraint is to keep the swarm connected at all times. Problems of
this type have been considered before, with recent notable results achieving
constant stretch for not necessarily connected reconfiguration: If mapping the
start configuration to the target configuration requires a maximum Manhattan
distance of $d$, the total duration of an overall schedule can be bounded to
$\mathcal{O}(d)$, which is optimal up to constant factors. However, constant
stretch could only be achieved if disconnected reconfiguration is allowed, or
for scaled configurations (which arise by increasing all dimensions of a given
object by the same multiplicative factor) of unlabeled robots.
</p>
<p>We resolve these major open problems by (1) establishing a lower bound of
$\Omega(\sqrt{n})$ for connected, labeled reconfiguration and, most
importantly, by (2) proving that for scaled arrangements, constant stretch for
connected reconfiguration can be achieved. In addition, we show that (3) it is
NP-hard to decide whether a makespan of 2 can be achieved, while it is possible
to check in polynomial time whether a makespan of 1 can be achieved.
</p>
arXiv: Data Structures and Algorithms: Learning-Augmented Algorithms for Online Linear and Semidefinite Programming
http://arxiv.org/abs/2209.10614
2022-09-23T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Grigorescu_E/0/1/0/all/0/1">Elena Grigorescu</a>, <a href="http://arxiv.org/find/cs/1/au:+Lin_Y/0/1/0/all/0/1">Young-San Lin</a>, <a href="http://arxiv.org/find/cs/1/au:+Silwal_S/0/1/0/all/0/1">Sandeep Silwal</a>, <a href="http://arxiv.org/find/cs/1/au:+Song_M/0/1/0/all/0/1">Maoyuan Song</a>, <a href="http://arxiv.org/find/cs/1/au:+Zhou_S/0/1/0/all/0/1">Samson Zhou</a></p><p>Semidefinite programming (SDP) is a unifying framework that generalizes both
linear programming and quadratically-constrained quadratic programming, while
also yielding efficient solvers, both in theory and in practice. However, there
exist known impossibility results for approximating the optimal solution when
constraints for covering SDPs arrive in an online fashion. In this paper, we
study online covering linear and semidefinite programs in which the algorithm
is augmented with advice from a possibly erroneous predictor. We show that if
the predictor is accurate, we can efficiently bypass these impossibility
results and achieve a constant-factor approximation to the optimal solution,
i.e., consistency. On the other hand, if the predictor is inaccurate, under
some technical conditions, we achieve results that match both the classical
optimal upper bounds and the tight lower bounds up to constant factors, i.e.,
robustness.
</p>
<p>More broadly, we introduce a framework that extends both (1) the online set
cover problem augmented with machine-learning predictors, studied by Bamas,
Maggiori, and Svensson (NeurIPS 2020), and (2) the online covering SDP problem,
initiated by Elad, Kale, and Naor (ICALP 2016). Specifically, we obtain general
online learning-augmented algorithms for covering linear programs with
fractional advice and constraints, and initiate the study of learning-augmented
algorithms for covering SDP problems.
</p>
<p>Our techniques are based on the primal-dual framework of Buchbinder and Naor
(Mathematics of Operations Research, 34, 2009) and can be further adjusted to
handle constraints where the variables lie in a bounded region, i.e., box
constraints.
</p>
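<p>The classical baseline that the framework extends can be sketched for online fractional set cover: variables only ever increase, via multiplicative updates in the style of Buchbinder and Naor. This omits the predictor entirely, and the update rule shown is one common textbook variant, not the paper's algorithm:</p>

```python
def online_fractional_cover(num_vars, costs, constraints):
    """Multiplicative-update scheme for online fractional covering:
    minimize sum c_j x_j subject to arriving constraints
    sum_{j in S} x_j >= 1. Variables only increase, so constraints
    seen earlier stay satisfied."""
    x = [0.0] * num_vars
    for S in constraints:                      # constraints arrive one by one
        while sum(x[j] for j in S) < 1.0:
            for j in S:                        # cheap variables grow faster
                x[j] = x[j] * (1 + 1 / costs[j]) + 1 / (len(S) * costs[j])
    return x

costs = [1.0, 2.0, 4.0]
constraints = [[0, 1], [1, 2], [0, 2]]
x = online_fractional_cover(3, costs, constraints)
print(all(sum(x[j] for j in S) >= 1 for S in constraints))  # True
```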
arXiv: Data Structures and Algorithms: A cubic algorithm for computing the Hermite normal form of a nonsingular integer matrix
http://arxiv.org/abs/2209.10685
2022-09-23T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Birmpilis_S/0/1/0/all/0/1">Stavros Birmpilis</a>, <a href="http://arxiv.org/find/cs/1/au:+Labahn_G/0/1/0/all/0/1">George Labahn</a>, <a href="http://arxiv.org/find/cs/1/au:+Storjohann_A/0/1/0/all/0/1">Arne Storjohann</a></p><p>A Las Vegas randomized algorithm is given to compute the Hermite normal form
of a nonsingular integer matrix $A$ of dimension $n$. The algorithm uses
quadratic integer multiplication and cubic matrix multiplication and has
running time bounded by $O(n^3 (\log n + \log ||A||)^2(\log n)^2)$ bit
operations, where $||A||= \max_{ij} |A_{ij}|$ denotes the largest entry of $A$
in absolute value. A variant of the algorithm that uses pseudo-linear integer
multiplication is given that has running time $(n^3 \log ||A||)^{1+o(1)}$ bit
operations, where the exponent $+o(1)$ captures additional factors $c_1 (\log
n)^{c_2} (\log \log ||A||)^{c_3}$ for positive real constants $c_1,c_2,c_3$.
</p>
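<p>For contrast with the carefully engineered algorithm above, the textbook approach is plain integer row reduction, which gives no control over the size of intermediate entries. A naive sketch for small nonsingular matrices:</p>

```python
def hermite_normal_form(A):
    """Row-operation HNF of a nonsingular integer matrix (naive). Returns H
    upper triangular with positive diagonal and 0 <= H[i][j] < H[j][j] for i < j."""
    H = [row[:] for row in A]
    n = len(H)
    for j in range(n):
        for i in range(j + 1, n):        # gcd-style elimination below the pivot
            while H[i][j] != 0:
                q = H[j][j] // H[i][j]
                H[j] = [a - q * b for a, b in zip(H[j], H[i])]
                H[j], H[i] = H[i], H[j]
        if H[j][j] < 0:                  # normalize the pivot sign
            H[j] = [-a for a in H[j]]
        for i in range(j):               # reduce the entries above the pivot
            q = H[i][j] // H[j][j]
            H[i] = [a - q * b for a, b in zip(H[i], H[j])]
    return H

print(hermite_normal_form([[4, 2], [2, 3]]))  # [[2, 3], [0, 4]]
```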
arXiv: Data Structures and Algorithms: Popular Edges with Critical Nodes
http://arxiv.org/abs/2209.10805
2022-09-23T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Chatterjee_K/0/1/0/all/0/1">Kushagra Chatterjee</a>, <a href="http://arxiv.org/find/cs/1/au:+Nimbhorkar_P/0/1/0/all/0/1">Prajakta Nimbhorkar</a></p><p>In the popular edge problem, the input is a bipartite graph $G = (A \cup
B,E)$ where $A$ and $B$ denote a set of men and a set of women respectively,
and each vertex in $A\cup B$ has a strict preference ordering over its
neighbours. A matching $M$ in $G$ is said to be {\em popular} if there is no
other matching $M'$ such that the number of vertices that prefer $M'$ to $M$ is
more than the number of vertices that prefer $M$ to $M'$. The goal is to
determine whether a given edge $e$ belongs to some popular matching in $G$. A
polynomial-time algorithm for this problem appears in \cite{CK18}. We consider
the popular edge problem when some men or women are prioritized or critical. A
matching that matches all the critical nodes is termed a feasible matching.
It follows from \cite{Kavitha14,Kavitha21,NNRS21,NN17} that, when $G$ admits a
feasible matching, there always exists a matching that is popular among all
feasible matchings. We give a polynomial-time algorithm for the popular edge
problem in the presence of critical men or women. We also show that an
analogous result does not hold in the many-to-one setting, known in the
literature as the Hospital-Residents Problem, even when there are no critical
nodes.
</p>
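<p>The popularity comparison in the definition is a head-to-head election between two matchings and is easy to evaluate directly (the hard part, addressed by the paper, is searching over feasible matchings). A sketch on a made-up toy instance:</p>

```python
def compare(pref, M1, M2):
    """Number of vertices that strictly prefer matching M1 to matching M2.
    pref[v] lists v's neighbours, most preferred first; being unmatched
    is worse than any listed partner."""
    def rank(v, partner):
        return len(pref[v]) if partner is None else pref[v].index(partner)
    return sum(1 for v in pref if rank(v, M1.get(v)) < rank(v, M2.get(v)))

def more_popular(pref, M1, M2):
    """M1 beats M2 in the head-to-head election of the definition above."""
    return compare(pref, M1, M2) > compare(pref, M2, M1)

# Toy instance (made up): men a, b; women x, y
pref = {"a": ["x", "y"], "b": ["x"], "x": ["b", "a"], "y": ["a"]}
M1 = {"a": "x", "x": "a"}                      # b and y left unmatched
M2 = {"a": "y", "y": "a", "b": "x", "x": "b"}  # everyone matched
print(compare(pref, M2, M1), compare(pref, M1, M2))  # 3 1
print(more_popular(pref, M2, M1))                    # True
```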
arXiv: Data Structures and Algorithms: Canadian Traveller Problem with Predictions
http://arxiv.org/abs/2209.11100
2022-09-23T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Bampis_E/0/1/0/all/0/1">Evripidis Bampis</a>, <a href="http://arxiv.org/find/cs/1/au:+Escoffier_B/0/1/0/all/0/1">Bruno Escoffier</a>, <a href="http://arxiv.org/find/cs/1/au:+Xefteris_M/0/1/0/all/0/1">Michalis Xefteris</a></p><p>In this work, we consider the $k$-Canadian Traveller Problem ($k$-CTP) under
the learning-augmented framework proposed by Lykouris & Vassilvitskii. $k$-CTP
is a generalization of the shortest path problem, and involves a traveller who
knows the entire graph in advance and wishes to find the shortest route from a
source vertex $s$ to a destination vertex $t$, but discovers online that some
edges (up to $k$) are blocked upon reaching them. A potentially imperfect
predictor gives us the number and the locations of the blocked edges.
</p>
<p>We present a deterministic and a randomized online algorithm for the
learning-augmented $k$-CTP that achieve a tradeoff between consistency (quality
of the solution when the prediction is correct) and robustness (quality of the
solution when there are errors in the prediction). Moreover, we prove a
matching lower bound for the deterministic case establishing that the tradeoff
between consistency and robustness is optimal, and show a lower bound for the
randomized algorithm. Finally, we prove several deterministic and randomized
lower bounds on the competitive ratio of $k$-CTP depending on the prediction
error, and complement them, in most cases, with matching upper bounds.
</p>
arXiv: Data Structures and Algorithms: Approximating $(p,2)$ flexible graph connectivity via the primal-dual method
http://arxiv.org/abs/2209.11209
2022-09-23T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Bansal_I/0/1/0/all/0/1">Ishan Bansal</a>, <a href="http://arxiv.org/find/cs/1/au:+Cheriyan_J/0/1/0/all/0/1">Joseph Cheriyan</a>, <a href="http://arxiv.org/find/cs/1/au:+Grout_L/0/1/0/all/0/1">Logan Grout</a>, <a href="http://arxiv.org/find/cs/1/au:+Ibrahimpur_S/0/1/0/all/0/1">Sharat Ibrahimpur</a></p><p>We consider the Flexible Graph Connectivity model (denoted FGC) introduced by
Adjiashvili, Hommelsheim and M\"uhlenthaler (IPCO 2020, Mathematical
Programming 2021), and its generalization, $(p,q)$-FGC, where $p \geq 1$ and $q
\geq 0$ are integers, introduced by Boyd et al.\ (FSTTCS 2021). In the
$(p,q)$-FGC model, we have an undirected connected graph $G=(V,E)$,
non-negative costs $c$ on the edges, and a partition $(\mathcal{S},
\mathcal{U})$ of $E$ into a set of safe edges $\mathcal{S}$ and a set of unsafe
edges $\mathcal{U}$. A subset $F \subseteq E$ of edges is called feasible if
for any set $F'\subseteq\mathcal{U}$ with $|F'| \leq q$, the subgraph $(V, F
\setminus F')$ is $p$-edge connected. The goal is to find a feasible edge-set
of minimum cost.
</p>
<p>For the special case of $(p,q)$-FGC when $q = 2$, we give an $O(1)$
approximation algorithm, thus improving on the logarithmic approximation ratio
of Boyd et al. (FSTTCS 2021). Our algorithm is based on the primal-dual method
for covering an uncrossable family, due to Williamson et al. (Combinatorica
1995). We conclude by studying weakly uncrossable families, which are a
generalization of the well-known notion of an uncrossable family.
</p>
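<p>The feasibility condition of the $(p,q)$-FGC model can be checked by brute force on small instances: delete every set of at most $q$ unsafe edges and test $p$-edge-connectivity of what remains. A sketch (exponential, for sanity checks only):</p>

```python
from itertools import combinations

def connected(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u] - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) == n

def p_edge_connected(n, edges, p):
    """Brute force: still connected after deleting any p-1 edges."""
    return all(
        connected(n, [e for k, e in enumerate(edges) if k not in set(drop)])
        for drop in combinations(range(len(edges)), p - 1)
    )

def fgc_feasible(n, safe, unsafe, p, q):
    """After any failure of at most q unsafe edges, the surviving
    subgraph must remain p-edge-connected."""
    for r in range(q + 1):
        for fail in combinations(range(len(unsafe)), r):
            rest = safe + [e for k, e in enumerate(unsafe) if k not in set(fail)]
            if not p_edge_connected(n, rest, p):
                return False
    return True

# 4-cycle 0-1-2-3-0 with two unsafe edges: survives one failure, not two
print(fgc_feasible(4, [(0, 1), (2, 3)], [(1, 2), (3, 0)], 1, 1))  # True
print(fgc_feasible(4, [(0, 1), (2, 3)], [(1, 2), (3, 0)], 1, 2))  # False
```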
CCI: jobs: Faculty at Claremont McKenna College (apply by November 15, 2022)
http://cstheory-jobs.org/2022/09/22/faculty-at-claremont-mckenna-college-apply-by-november-15-2022/
2022-09-22T17:40:49+00:00
<p>The Department of Mathematical Sciences at Claremont McKenna College invites applications for a tenure-track position, at the assistant professor level, in Probability, Statistics, and Statistical Computing.</p>
<p>Website: <a href="https://www.mathjobs.org/jobs/list/20279">https://www.mathjobs.org/jobs/list/20279</a><br />
Email: sarah.cannon@cmc.edu; Ckao@claremontmckenna.edu</p>
<p class="authors">By shacharlovett</p>
CCI: jobs: Tenure track assistant professor at CUNY’s Baruch College (apply by November 7, 2022)
http://cstheory-jobs.org/2022/09/22/tenure-track-assistant-professor-at-cunys-baruch-college-apply-by-november-7-2022/
2022-09-22T17:00:15+00:00
<p>Baruch College, part of CUNY, lies at the heart of Manhattan. It is regularly ranked as the country’s top college for social mobility. Since Baruch College was traditionally CUNY’s business school, it did not include Computer Science. Our computer science major will start in August 2023. We are hiring professors that will help shape and grow computer science at Baruch.</p>
<p>Website: <a href="https://geometrynyc.wixsite.com/csjobs">https://geometrynyc.wixsite.com/csjobs</a><br />
Email: warren.gordon@baruch.cuny.edu</p>
<p class="authors">By shacharlovett</p>
Richard Lipton: Cheating at Chess—Not Again
https://rjlipton.wpcomstaging.com/?p=20420
2022-09-22T03:59:28+00:00
<p><em>Play the opening like a book, the middle game like a magician, and the end game like a machine — Rudolf Spielmann</em></p>
<p><a href="https://rjlipton.wpcomstaging.com/2022/09/21/cheating-at-chess-not-again/kenglenveagh/"><img src="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2022/09/KenGlenveagh.jpeg?resize=123%2C181&#038;ssl=1" alt="Kenneth Regan" width="123" height="181" class="alignright wp-image-20422" /></a></p>
<p>
Kenneth Regan is my dear friend and co-writer of this blog. He obtained his doctorate—technically D.Phil not PhD—in 1986 for a thesis titled <em>On the Separation of Complexity Classes</em> from the University of Oxford under Dominic Welsh. He has, however, been enmeshed this month in a story quite separate from complexity classes.</p>
<p>
It was Ken’s birthday just last week and we wish him many more.</p>
<h2>Cheating at Chess</h2>
<p>
Ken was the 1977 US Junior co-champion and once held the record as the youngest USCF Master since Bobby Fischer. He holds the title of International Master with a rating of 2372. Ken is perhaps the strongest chess player ever with a doctorate in complexity theory.</p>
<p>
He is certainly the world's best at <i>both</i> complexity theory and cheating at chess: Ken is one of the leading experts in detecting cheating in games played in real tournaments. </p>
<p>
He has, however, been occupied by a major story that erupted after the world champion, Magnus Carlsen, lost to the American teenager and bottom-rated participant Hans Niemann in the third round of the Sinquefield Cup in St. Louis. The next day, Labor Day, Carlsen abruptly withdrew from the tournament with no explanation beyond a cryptic <a href="https://twitter.com/MagnusCarlsen/status/1566848734616555523">tweet</a>. This was widely regarded as an insinuation of some kind of cheating. Ken was involved in daily monitoring of the event and was cited in a subsequent <a href="https://grandchesstour.org/blog/2022-sinquefield-cup-chief-arbiter's-statement">press release</a> as having found nothing amiss. </p>
<p>
Nevertheless—really <i>evermore</i>—this has sparked renewed discussion of cheating at chess and measures to protect tournaments at all levels. Let’s go into that.</p>
<p>
<p><H2> Detecting Cheating </H2></p>
<p><p>
How does one cheat at chess? Imagine Bob is playing a game in a live chess tournament. Bob is a strong player but is not nearly as strong as his opponent Ted. How does Bob cheat?</p>
<p>
<a href="https://rjlipton.wpcomstaging.com/2022/09/21/cheating-at-chess-not-again/two-2/" rel="attachment wp-att-20423"><img data-attachment-id="20423" data-permalink="https://rjlipton.wpcomstaging.com/2022/09/21/cheating-at-chess-not-again/two-2/" data-orig-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2022/09/two.jpeg?fit=273%2C184&ssl=1" data-orig-size="273,184" data-comments-opened="1" data-image-meta="{"aperture":"0","credit":"","camera":"","caption":"","created_timestamp":"0","copyright":"","focal_length":"0","iso":"0","shutter_speed":"0","title":"","orientation":"0"}" data-image-title="two" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2022/09/two.jpeg?fit=273%2C184&ssl=1" data-large-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2022/09/two.jpeg?fit=273%2C184&ssl=1" loading="lazy" src="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2022/09/two.jpeg?resize=273%2C184&ssl=1" alt="" width="273" height="184" class="aligncenter size-full wp-image-20423" data-recalc-dims="1" /></a></p>
<p>
The basic idea is quite simple: Bob uses a computer program <img src="https://s0.wp.com/latex.php?latex=%7BP%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{P}" class="latex" /> to make moves for him. He types Ted’s moves into <img src="https://s0.wp.com/latex.php?latex=%7BP%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{P}" class="latex" /> and then makes its moves. The reason this is so powerful is that the rating of the computer program <img src="https://s0.wp.com/latex.php?latex=%7BP%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{P}" class="latex" /> is likely much higher than Ted’s. It could be rated at 3000 or even higher. This means that Bob is likely not only to avoid losing to Ted but even to beat him. </p>
<p>
The challenge for Bob in cheating this way is that he must ask the program <img src="https://s0.wp.com/latex.php?latex=%7BP%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{P}" class="latex" /> for its moves without being detected. Bob is not allowed to have a digital device like a phone or a laptop with which to consult the program <img src="https://s0.wp.com/latex.php?latex=%7BP%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{P}" class="latex" />. He must enter Ted’s last move and then play <img src="https://s0.wp.com/latex.php?latex=%7BP%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{P}" class="latex" />‘s reply without it being noticed that he invoked the program at all.</p>
<p>
The cheater may be able to send the moves to the program <img src="https://s0.wp.com/latex.php?latex=%7BP%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{P}" class="latex" /> in various ways. In some cases Bob has been found to use a hidden device to get this information to <img src="https://s0.wp.com/latex.php?latex=%7BP%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{P}" class="latex" />. He may also use clever ways to get the moves back from <img src="https://s0.wp.com/latex.php?latex=%7BP%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{P}" class="latex" />. </p>
<p>
<p><H2> Why Is Detection Hard? </H2></p>
<p><p>
Ken is one of the world’s foremost experts on using predictive analytics to help detect computer-assisted cheating in chess tournaments. Why is this hard? There are several reasons, but the central point is expressed by Alexander <a href="https://en.wikipedia.org/wiki/Alexander_Grischuk">Grischuk</a>, who notes that “only a very stupid Bob who stubbornly plays the computer’s first line” is likely to get detected.</p>
<p>
Let’s examine what Grischuk means. Bob as above is trying to use <img src="https://s0.wp.com/latex.php?latex=%7BP%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{P}" class="latex" />‘s moves to defeat Ted. Grischuk’s point is that Bob is stupid if he blindly uses the first move that the program <img src="https://s0.wp.com/latex.php?latex=%7BP%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{P}" class="latex" /> suggests. Programs often suggest more than one move that is safe to play. This makes detection much harder. </p>
<p>
An even more powerful point: what if Bob consults more than one program? Perhaps Bob checks the top moves from several programs <img src="https://s0.wp.com/latex.php?latex=%7BP_1%2C+P_2%2C+%5Cdots%2C+P_6%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{P_1, P_2, \dots, P_6}" class="latex" />. This could make detecting his cheating even more difficult. </p>
<p>
Bob could use similar ideas to make detecting that he is consulting a program even more complicated. This is why Ken’s task of checking whether cheating occurred is so difficult. He tries to stay ahead on the detection end. For instance, his model is not predicated on identifying which program was used, and the provisionally-deployed ideas explored with his students <a href="https://rjlipton.wpcomstaging.com/2019/11/29/predicating-predictivity/">here</a> quantify departure from human predictivity apart from any programs.</p>
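<p>Ken’s actual model is far more sophisticated, but the flavor of such predictive-analytics tests can be sketched in a few lines. Below is a toy illustration—the function and all numbers are hypothetical, not Ken’s method: given model-predicted probabilities that a human of the player’s strength would match the engine’s top choice on each move, the observed number of matches is converted to a z-score. A large positive z-score flags engine agreement far above what the model predicts for a human.</p>

```python
import math

def match_z_score(match_probs, actual_matches):
    """z-score of the observed engine-match count against the model's
    per-move match probabilities (moves treated as independent)."""
    expected = sum(match_probs)
    variance = sum(p * (1 - p) for p in match_probs)
    return (actual_matches - expected) / math.sqrt(variance)

# Hypothetical numbers: a player whose moves the model expects to match
# the engine about half the time, yet who matches on 28 of 30 moves.
z = match_z_score([0.5] * 30, 28)  # roughly 4.75 standard deviations
```

<p>Real tests must also handle move-by-move probabilities that vary with position difficulty, which is part of what makes the modeling hard.</p>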
<p>
Consult <a href="https://katv.com/news/nation-world/chess-grandmaster-accused-of-using-sex-toy-to-cheat-win-against-worlds-top-player-hans-niemann-magnus-carlsen-anal-beads-cheating-ai-artificial-intellegence">this report</a> for a recent claim that Niemann used <a href="https://www.cosmopolitan.com/sex-love/a12274254/anal-beads-how-to/">anal beads</a> to signal moves. Even Elon Musk <a href="https://futurism.com/the-byte/elon-musk-sex-toy-chess">raised</a> this possibility. It is just an extreme example of why detecting cheating is tough.</p>
<p>
<p><H2> Losing in Translation </H2></p>
<p><p>
The chess story took another twist when Carlsen and Niemann faced each other on Monday in the Julius Baer Generations Cup, an online tournament sponsored by Carlsen’s own organization. Carlsen played one move and then resigned the game—again giving no comment. Much effort has been expended in trying to translate exactly what Carlsen meant by losing in this manner.</p>
<p>
Two years ago, a <a href="https://www.theguardian.com/sport/2020/oct/16/chesss-cheating-crisis-paranoia-has-become-the-culture">story</a> in the <em>Guardian</em> newspaper subtitled “paranoia has become the culture” featured Ken and efforts to avert cheating in tournaments that were moved online on account of the pandemic. Its quoting of Ken included an example of translation from English to <em>English</em>:</p>
<blockquote><p><b> </b> <em> “The pandemic has brought me as much work in a single day as I have had in a year previously,” said Prof Kenneth Regan, an international chess master and computer scientist whose model is relied on by the sport’s governing body, <a href="https://www.fide.com">FIDE</a>, to detect suspicious patterns of play. “It has ruined my sabbatical.” </em>
</p></blockquote>
<p><p>
What Ken actually said was, “It ate my sabbatical.” </p>
<p>
Now Ken was mentioned in the <em>Guardian</em> <a href="https://www.theguardian.com/sport/2022/sep/20/carlsen-v-niemann-the-cheating-row-that-is-rocking-chess-explained">yesterday</a> and again <a href="https://www.theguardian.com/sport/2022/sep/21/magnus-carlsen-v-hans-niemann-world-champion-resigns-after-one-move-chess-julius-baer-generation-cup">today</a>. Today’s mention linked a longer <a href="https://en.chessbase.com/post/is-hans-niemann-cheating-world-renowned-expert-ken-regan-analyzes">article</a> on the ChessBase site explaining his methods and conclusions to date. Ken may have more to say after the developments—and ongoing media contacts—settle down. </p>
<p>
<a href="https://rjlipton.wpcomstaging.com/2022/09/21/cheating-at-chess-not-again/kenoffice/" rel="attachment wp-att-20424"><img data-attachment-id="20424" data-permalink="https://rjlipton.wpcomstaging.com/2022/09/21/cheating-at-chess-not-again/kenoffice/" data-orig-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2022/09/KenOffice.jpeg?fit=264%2C191&ssl=1" data-orig-size="264,191" data-comments-opened="1" data-image-meta="{"aperture":"0","credit":"","camera":"","caption":"","created_timestamp":"0","copyright":"","focal_length":"0","iso":"0","shutter_speed":"0","title":"","orientation":"0"}" data-image-title="KenOffice" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2022/09/KenOffice.jpeg?fit=264%2C191&ssl=1" data-large-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2022/09/KenOffice.jpeg?fit=264%2C191&ssl=1" loading="lazy" src="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2022/09/KenOffice.jpeg?resize=264%2C191&ssl=1" alt="" width="264" height="191" class="aligncenter size-full wp-image-20424" data-recalc-dims="1" /></a></p>
<p>
<p><H2> Open Problems </H2></p>
<p><p>
How will chess come out of the current controversies? I hope Ken had a happy birthday in the meantime.</p>
<p>
<p class="authors">By rjlipton</p>
Richard Liptonhttps://rjlipton.wpcomstaging.comarXiv: Computational Complexity: Capturing Bisimulation-Invariant Exponential-Time Complexity Classeshttp://arxiv.org/abs/2209.103112022-09-22T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Bruse_F/0/1/0/all/0/1">Florian Bruse</a> (University of Kassel, Kassel, Germany), <a href="http://arxiv.org/find/cs/1/au:+Kronenberger_D/0/1/0/all/0/1">David Kronenberger</a> (University of Kassel, Kassel, Germany), <a href="http://arxiv.org/find/cs/1/au:+Lange_M/0/1/0/all/0/1">Martin Lange</a> (University of Kassel, Kassel, Germany)</p><p>Otto's Theorem characterises the bisimulation-invariant PTIME queries over
graphs as exactly those that can be formulated in the polyadic mu-calculus,
hinging on the Immerman-Vardi Theorem which characterises PTIME (over ordered
structures) by First-Order Logic with least fixpoints. This connection has been
extended to characterise bisimulation-invariant EXPTIME by an extension of the
polyadic mu-calculus with functions on predicates, making use of Immerman's
characterisation of EXPTIME by Second-Order Logic with least fixpoints. In this
paper we show that the bisimulation-invariant versions of all classes in the
exponential time hierarchy have logical counterparts which arise as extensions
of the polyadic mu-calculus by higher-order functions. This makes use of the
characterisation of k-EXPTIME by Higher-Order Logic (of order k+1) with least
fixpoints, due to Freire and Martins.
</p>
arXiv: Computational Complexityhttps://arxiv.org/list/cs.CC/recentarXiv: Computational Complexity: Schema-Based Automata Determinizationhttp://arxiv.org/abs/2209.103122022-09-22T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Niehren_J/0/1/0/all/0/1">Joachim Niehren</a> (Inria, Université de Lille, France), <a href="http://arxiv.org/find/cs/1/au:+Sakho_M/0/1/0/all/0/1">Momar Sakho</a> (Inria, Université de Lille, France), <a href="http://arxiv.org/find/cs/1/au:+Serhali_A/0/1/0/all/0/1">Antonio Al Serhali</a> (Inria, Université de Lille, France)</p><p>We propose an algorithm for schema-based determinization of finite automata
on words and of step-wise hedge automata on nested words. The idea is to
integrate schema-based cleaning directly into automata determinization. We
prove the correctness of our new algorithm and show that it is always more
efficient than standard determinization followed by schema-based cleaning. Our
implementation makes it possible to obtain a small deterministic automaton for an example
of an XPath query, where standard determinization yields a huge stepwise hedge
automaton for which schema-based cleaning runs out of memory.
</p>
arXiv: Computational Complexityhttps://arxiv.org/list/cs.CC/recentarXiv: Computational Complexity: BQP is not in NPhttp://arxiv.org/abs/2209.103982022-09-22T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Librande_J/0/1/0/all/0/1">Jonah Librande</a></p><p>Quantum computers are widely believed to have an advantage over classical
computers, and some researchers have even published empirical evidence that this is
the case. However, these publications do not include a rigorous proof of this
advantage, which would have to minimally state that the class of problems
decidable by a quantum computer in polynomial time, BQP, contains problems that
are not in the class of problems decidable by a classical computer with similar
time bounds, P. Here, I provide the proof of a stronger result that implies
this result: BQP contains problems that lie beyond the much larger classical
computing class NP. This proves that quantum computation is able to efficiently
solve problems which are far beyond the capabilities of classical computers.
</p>
arXiv: Computational Complexityhttps://arxiv.org/list/cs.CC/recentarXiv: Computational Complexity: Downward Self-Reducibility in TFNPhttp://arxiv.org/abs/2209.105092022-09-22T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Harsha_P/0/1/0/all/0/1">Prahladh Harsha</a>, <a href="http://arxiv.org/find/cs/1/au:+Mitropolsky_D/0/1/0/all/0/1">Daniel Mitropolsky</a>, <a href="http://arxiv.org/find/cs/1/au:+Rosen_A/0/1/0/all/0/1">Alon Rosen</a></p><p>A problem is \emph{downward self-reducible} if it can be solved efficiently
given an oracle that returns solutions for strictly smaller instances. In the
decisional landscape, downward self-reducibility is well studied and it is
known that all downward self-reducible problems are in \textsc{PSPACE}. In this
paper, we initiate the study of downward self-reducible search problems which
are guaranteed to have a solution -- that is, the downward self-reducible
problems in \textsc{TFNP}. We show that most natural \textsc{PLS}-complete problems
are downward self-reducible and any downward self-reducible problem in
\textsc{TFNP} is contained in \textsc{PLS}. Furthermore, if the downward
self-reducible problem is in \textsc{UTFNP} (i.e. it has a unique solution),
then it is actually contained in \textsc{CLS}. This implies that if integer
factoring is \emph{downward self-reducible} then it is in fact in \textsc{CLS},
suggesting that no efficient factoring algorithm exists using the factorization
of smaller numbers.
</p>
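<p>A classic decisional example of the notion in this abstract—not from the paper itself—is SAT: an instance on $n$ variables reduces to instances on $n-1$ variables by fixing one variable both ways. A minimal Python sketch, where a CNF formula is a list of clauses and a literal is $\pm v$, and the "oracle" for smaller instances is simply the same procedure:</p>

```python
def assign(clauses, var, value):
    """Simplify CNF clauses after fixing `var` (a positive int) to `value`."""
    lit_true = var if value else -var
    out = []
    for clause in clauses:
        if lit_true in clause:
            continue                                  # clause satisfied: drop it
        out.append([l for l in clause if l != -lit_true])  # falsified literal removed
    return out

def sat(clauses, n):
    """Decide SAT on variables 1..n via the downward self-reduction:
    each call only queries instances on n-1 variables."""
    if any(len(c) == 0 for c in clauses):
        return False                                  # an empty clause is unsatisfiable
    if n == 0:
        return True                                   # no clauses left unfalsified
    return sat(assign(clauses, n, True), n - 1) or sat(assign(clauses, n, False), n - 1)
```

<p>The paper's point is about the total search analogue: the same style of reduction, when the problem is guaranteed to have a solution, lands the problem in \textsc{PLS}.</p>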
arXiv: Computational Complexityhttps://arxiv.org/list/cs.CC/recentarXiv: Computational Geometry: The Dispersive Art Gallery Problemhttp://arxiv.org/abs/2209.102912022-09-22T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Rieck_C/0/1/0/all/0/1">Christian Rieck</a>, <a href="http://arxiv.org/find/cs/1/au:+Scheffer_C/0/1/0/all/0/1">Christian Scheffer</a></p><p>We introduce a new variant of the art gallery problem that comes from safety
issues. In this variant we are not interested in guard sets of smallest
cardinality, but in guard sets with largest possible distances between these
guards. To the best of our knowledge, this variant has not been considered
before. We call it the Dispersive Art Gallery Problem. In particular, in the
dispersive art gallery problem we are given a polygon $\mathcal{P}$ and a real
number $\ell$, and want to decide whether $\mathcal{P}$ has a guard set such
that every pair of guards in this set is at least a distance of $\ell$ apart.
</p>
<p>In this paper, we study the vertex guard variant of this problem for the
class of polyominoes. We consider rectangular visibility and distances as
geodesics in the $L_1$-metric. Our results are as follows. We give a (simple)
thin polyomino such that every guard set has minimum pairwise distances of at
most $3$. On the positive side, we describe an algorithm that computes guard
sets for simple polyominoes that match this upper bound, i.e., the algorithm
constructs worst-case optimal solutions. We also study the computational
complexity of computing guard sets that maximize the smallest distance between
all pairs of guards within the guard sets. We prove that deciding whether there
exists a guard set realizing a minimum pairwise distance for all pairs of
guards of at least $5$ in a given polyomino is NP-complete.
</p>
<p>We were also able to find an optimal dynamic programming approach that
computes a guard set that maximizes the minimum pairwise distance between
guards in tree-shaped polyominoes, i.e., computes optimal solutions. Because
the shapes constructed in the NP-hardness reduction are thin as well (but have
holes), this result completes the case for thin polyominoes.
</p>
arXiv: Computational Geometryhttps://arxiv.org/list/cs.CG/recentarXiv: Computational Geometry: Efficient inspection of underground galleries using k robots with limited energyhttp://arxiv.org/abs/2209.104002022-09-22T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Bereg_S/0/1/0/all/0/1">Sergey Bereg</a>, <a href="http://arxiv.org/find/cs/1/au:+Caraballo_L/0/1/0/all/0/1">L. Evaristo Caraballo</a>, <a href="http://arxiv.org/find/cs/1/au:+Diaz_Banez_J/0/1/0/all/0/1">José Miguel Díaz-Báñez</a></p><p>We study the problem of optimally inspecting an underground (underwater)
gallery with k agents. We consider a gallery with a single opening and with a
tree topology rooted at the opening. Due to the small diameter of the pipes
(caves), the agents are small robots with limited autonomy and there is a
supply station at the gallery's opening. Therefore, they are initially placed
at the root and periodically need to return to the supply station. Our goal is
to design off-line strategies to efficiently cover the tree with $k$ small
robots. We consider two objective functions: the covering time (maximum
collective time) and the covering distance (total traveled distance). The
maximum collective time is the maximum time a robot needs to finish
its assigned task (assuming that all the robots start at the same time); the
total traveled distance is the sum of the lengths of all the covering walks.
Since the problems are intractable for big trees, we propose approximation
algorithms. Both the efficiency and the accuracy of the suboptimal solutions are
empirically shown for random trees through intensive numerical experiments.
</p>
arXiv: Computational Geometryhttps://arxiv.org/list/cs.CG/recentarXiv: Data Structures and Algorithms: Characterizing the Decidability of Finite State Automata Team Games with Communicationhttp://arxiv.org/abs/2209.103242022-09-22T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Coulombe_M/0/1/0/all/0/1">Michael Coulombe</a> (Massachusetts Institute of Technology), <a href="http://arxiv.org/find/cs/1/au:+Lynch_J/0/1/0/all/0/1">Jayson Lynch</a> (Cheriton School of Computer Science, University of Waterloo)</p><p>In this paper we define a new model of limited communication for multiplayer
team games of imperfect information. We prove that the Team DFA Game and Team
Formula Game, which have bounded state, remain undecidable when players have a
rate of communication which is less than the rate at which they make moves in
the game. We also show that meeting this communication threshold causes these
games to be decidable.
</p>
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Parametric Synthesis of Computational Circuits for Complex Quantum Algorithmshttp://arxiv.org/abs/2209.099032022-09-22T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/quant-ph/1/au:+Pronin_C/0/1/0/all/0/1">Cesar Borisovich Pronin</a>, <a href="http://arxiv.org/find/quant-ph/1/au:+Ostroukh_A/0/1/0/all/0/1">Andrey Vladimirovich Ostroukh</a></p><p>At the moment, quantum circuits are created mainly by manually placing logic
elements on lines that symbolize quantum bits. The purpose of creating Quantum
Circuit Synthesizer "Naginata" was that even a slight increase in the number
of operations in a quantum algorithm leads to a significant increase in the
size of the corresponding quantum circuit. This causes
serious difficulties both in creating and debugging these quantum circuits. The
purpose of our quantum synthesizer is to give users an opportunity to
implement quantum algorithms using higher-level commands. This is achieved by
creating generic blocks for frequently used operations such as: the adder,
multiplier, digital comparator (comparison operator), etc. Thus, the user could
implement a quantum algorithm by using these generic blocks, and the quantum
synthesizer would create a suitable circuit for this algorithm, in a format
that is supported by the chosen quantum computation environment. This approach
greatly simplifies the processes of development and debugging a quantum
algorithm. The proposed approach for implementing quantum algorithms has a
potential application in the field of machine learning, in this regard, we
provided an example of creating a circuit for training a simple neural network.
Neural networks have a significant impact on the technological development of
the transport and road complex, and there is a potential for improving the
reliability and efficiency of their learning process by utilizing quantum
computation.
</p>
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Exact and Sampling Methods for Mining Higher-Order Motifs in Large Hypergraphshttp://arxiv.org/abs/2209.102412022-09-22T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Lotito_Q/0/1/0/all/0/1">Quintino Francesco Lotito</a>, <a href="http://arxiv.org/find/cs/1/au:+Musciotto_F/0/1/0/all/0/1">Federico Musciotto</a>, <a href="http://arxiv.org/find/cs/1/au:+Battiston_F/0/1/0/all/0/1">Federico Battiston</a>, <a href="http://arxiv.org/find/cs/1/au:+Montresor_A/0/1/0/all/0/1">Alberto Montresor</a></p><p>Network motifs are patterns of interactions occurring among a small set of
nodes in a graph. They highlight fundamental aspects of the interplay between
the topology and the dynamics of complex networks and have a wide range of
real-world applications. Motif analysis has been extended to a variety of
network models that allow for a richer description of the interactions of a
system, including weighted, temporal, multilayer, and, more recently,
higher-order networks. Generalizing network motifs to capture patterns of group
interactions is not only interesting from the fundamental perspective of
understanding complex systems, but also poses unprecedented computational
challenges. In this work, we focus on the problem of counting occurrences of
sub-hypergraph patterns in very large higher-order networks. We show that, by
directly exploiting higher-order structures, we speed up the counting process
compared to applying traditional data mining techniques for network motifs.
Moreover, by including hyperedge sampling techniques, computational complexity
is further reduced at the cost of small errors in the estimation of motif
frequency. We evaluate our algorithms on several real-world datasets describing
face-to-face interactions, co-authorship and human communication. We show that
our approximate algorithm not only speeds up the computation, but
also extracts larger higher-order motifs beyond the computational limits of
an exact approach.
</p>
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: On Reachable Assignments under Dichotomous Preferenceshttp://arxiv.org/abs/2209.102622022-09-22T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Ito_T/0/1/0/all/0/1">Takehiro Ito</a>, <a href="http://arxiv.org/find/cs/1/au:+Kakimura_N/0/1/0/all/0/1">Naonori Kakimura</a>, <a href="http://arxiv.org/find/cs/1/au:+Kamiyama_N/0/1/0/all/0/1">Naoyuki Kamiyama</a>, <a href="http://arxiv.org/find/cs/1/au:+Kobayashi_Y/0/1/0/all/0/1">Yusuke Kobayashi</a>, <a href="http://arxiv.org/find/cs/1/au:+Nozaki_Y/0/1/0/all/0/1">Yuta Nozaki</a>, <a href="http://arxiv.org/find/cs/1/au:+Okamoto_Y/0/1/0/all/0/1">Yoshio Okamoto</a>, <a href="http://arxiv.org/find/cs/1/au:+Ozeki_K/0/1/0/all/0/1">Kenta Ozeki</a></p><p>We consider the problem of determining whether a target item assignment can
be reached from an initial item assignment by a sequence of pairwise exchanges
of items between agents. In particular, we consider the situation where each
agent has a dichotomous preference over the items, that is, each agent
evaluates each item as acceptable or unacceptable. Furthermore, we assume that
communication between agents is limited, and the relationship is represented by
an undirected graph. Then, a pair of agents can exchange their items only if
they are connected by an edge and the involved items are acceptable. We prove
that this problem is PSPACE-complete even when the communication graph is
complete (that is, every pair of agents can exchange their items), and this
problem can be solved in polynomial time if an input graph is a tree.
</p>
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Improved Approximation for Two-Edge-Connectivityhttp://arxiv.org/abs/2209.102652022-09-22T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Garg_M/0/1/0/all/0/1">Mohit Garg</a>, <a href="http://arxiv.org/find/cs/1/au:+Grandoni_F/0/1/0/all/0/1">Fabrizio Grandoni</a>, <a href="http://arxiv.org/find/cs/1/au:+Ameli_A/0/1/0/all/0/1">Afrouz Jabal Ameli</a></p><p>The basic goal of survivable network design is to construct low-cost networks
which preserve a sufficient level of connectivity despite the failure or
removal of a few nodes or edges. One of the most basic problems in this area is
the $2$-Edge-Connected Spanning Subgraph problem (2-ECSS): given an undirected
graph $G$, find a $2$-edge-connected spanning subgraph $H$ of $G$ with the
minimum number of edges (in particular, $H$ remains connected after the removal
of one arbitrary edge).
</p>
<p>2-ECSS is NP-hard and the best-known (polynomial-time) approximation factor
for this problem is $4/3$. Interestingly, this factor was achieved with
drastically different techniques by [Hunkenschr{\"o}der, Vempala and Vetta
'00,'19] and [Seb{\"o} and Vygen, '14]. In this paper we present an improved
$\frac{118}{89}+\epsilon<1.326$ approximation for 2-ECSS.
</p>
<p>The key ingredient in our approach (which might also be helpful in future
work) is a reduction to a special type of structured graphs: our reduction
preserves approximation factors up to $6/5$. While reducing to
2-vertex-connected graphs is trivial (and heavily used in prior work), our
structured graphs are "almost" 3-vertex-connected: more precisely, given any
2-vertex-cut $\{u,v\}$ of a structured graph $G=(V,E)$, $G[V\setminus \{u,v\}]$
has exactly 2 connected components, one of which contains exactly one node of
degree $2$ in $G$.
</p>
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Avoid One's Doom: Finding Cliff-Edge Configurations in Petri Netshttp://arxiv.org/abs/2209.103232022-09-22T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Aguirre_Samboni_G/0/1/0/all/0/1">Giann Karlo Aguirre-Samboní</a> (INRIA and LMF, CNRS and ENS Paris-Saclay, Université Paris-Saclay), <a href="http://arxiv.org/find/cs/1/au:+Haar_S/0/1/0/all/0/1">Stefan Haar</a> (INRIA and LMF, CNRS and ENS Paris-Saclay, Université Paris-Saclay), <a href="http://arxiv.org/find/cs/1/au:+Pauleve_L/0/1/0/all/0/1">Loïc Paulevé</a> (Univ. Bordeaux, Bordeaux INP, CNRS, LaBRI, UMR5800), <a href="http://arxiv.org/find/cs/1/au:+Schwoon_S/0/1/0/all/0/1">Stefan Schwoon</a> (INRIA and LMF, CNRS and ENS Paris-Saclay, Université Paris-Saclay), <a href="http://arxiv.org/find/cs/1/au:+Wurdemann_N/0/1/0/all/0/1">Nick Würdemann</a> (Department of Computing Science, University of Oldenburg)</p><p>A crucial question in analyzing a concurrent system is to determine its
long-run behaviour, and in particular, whether there are irreversible choices
in its evolution, leading into parts of the reachability space from which there
is no return to other parts. Casting this problem in the unifying framework of
safe Petri nets, our previous work has provided techniques for identifying
attractors, i.e. terminal strongly connected components of the reachability
space, whose attraction basins we wish to determine. Here, we provide a
solution for the case of safe Petri nets. Our algorithm uses net unfoldings and
provides a map of all of the system's configurations (concurrent executions)
that act as cliff-edges, i.e. any maximal extension for those configurations
lies in some basin that is considered fatal. The computation turns out to
require only a relatively small prefix of the unfolding, just twice the depth
of Esparza's complete prefix.
</p>
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Quasipolynomial-time algorithms for repulsive Gibbs point processeshttp://arxiv.org/abs/2209.104532022-09-22T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Jenssen_M/0/1/0/all/0/1">Matthew Jenssen</a>, <a href="http://arxiv.org/find/cs/1/au:+Michelen_M/0/1/0/all/0/1">Marcus Michelen</a>, <a href="http://arxiv.org/find/cs/1/au:+Ravichandran_M/0/1/0/all/0/1">Mohan Ravichandran</a></p><p>We demonstrate a quasipolynomial-time deterministic approximation algorithm
for the partition function of a Gibbs point process interacting via a repulsive
potential. This result holds for all activities $\lambda$ for which the
partition function satisfies a zero-free assumption in a neighborhood of the
interval $[0,\lambda]$. As a corollary, we obtain a quasipolynomial-time
deterministic approximation algorithm for all $\lambda < e/\Delta_\phi$, where
$\Delta_\phi$ is the potential-weighted connective constant of the potential
$\phi$. Our algorithm approximates coefficients of the cluster expansion of the
partition function and uses the interpolation method of Barvinok to extend this
approximation throughout the zero-free region.
</p>
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Chaining, Group Leverage Score Overestimates, and Fast Spectral Hypergraph Sparsificationhttp://arxiv.org/abs/2209.105392022-09-22T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Jambulapati_A/0/1/0/all/0/1">Arun Jambulapati</a>, <a href="http://arxiv.org/find/cs/1/au:+Liu_Y/0/1/0/all/0/1">Yang P. Liu</a>, <a href="http://arxiv.org/find/cs/1/au:+Sidford_A/0/1/0/all/0/1">Aaron Sidford</a></p><p>We present an algorithm that given any $n$-vertex, $m$-edge, rank $r$
hypergraph constructs a spectral sparsifier with $O(n \varepsilon^{-2} \log n
\log r)$ hyperedges in nearly-linear $\widetilde{O}(mr)$ time. This improves in
both size and efficiency over a line of work (Bansal-Svensson-Trevisan 2019,
Kapralov-Krauthgamer-Tardos-Yoshida 2021) for which the previous best size was
$O(\min\{n \varepsilon^{-4} \log^3 n,nr^3 \varepsilon^{-2} \log n\})$ and
runtime was $\widetilde{O}(mr + n^{O(1)})$.
</p>
<p>Independent Result: In an independent work, Lee (Lee 2022) also shows how to
compute a spectral hypergraph sparsifier with $O(n \varepsilon^{-2} \log n \log
r)$ hyperedges.
</p>
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentComputational Complexity: POSTED UPDATED VERSION OF Computers and Intractability: A guide to Algorithmic Lower Bounds posted (New title)tag:blogger.com,1999:blog-3722233.post-66733999993584909802022-09-21T20:48:00+00:00
<p>We have posted a revised version of <i>Computational Intractability: A Guide to Algorithmic Lower Bounds</i> by Demaine-Gasarch-Hajiaghayi.</p><p>The book is <a href="https://hardness.mit.edu/">here</a>.</p><p>(The original post about it, since edited to use the new title, is <a href="https://blog.computationalcomplexity.org/2022/08/computers-and-intractability-guide-to.html">HERE</a>.)</p><p>We <i>changed the title</i> (the title above is the new one) since the earlier title looked <i>too much</i> like the title of Garey and Johnson's classic. While that was intentional, we later felt that it was <i>too close</i> to their title and might cause confusion. Of course, changing the title might <i>also</i> cause confusion; however, this post (and we will email various people as well) will stem that confusion.</p><p>We welcome corrections, suggestions and comments on the book. Email us at <a href="mailto:hardness-book@mit.edu">hardness-book@mit.edu</a>.</p><p class="authors">By gasarch</p>
Computational Complexityhttp://blog.computationalcomplexity.org/David Eppstein: Counting paths in convex polygonshttps://11011110.github.io/blog/2022/09/21/counting-paths-convex2022-09-21T17:32:00+00:00
<p>Let’s count non-crossing paths through all the points of a convex polygon.
There is a very simple formula for this, \(n2^{n-3}\) undirected paths through an \(n\)-gon, but why? Here’s a simple coloring-based argument that immediately gives this formula.</p>
<p>Choose a coloring for the points of the polygon, red and blue, and choose a starting point for the path. Build a path, starting from this point, by the following rule: if you are at a red point, go to the next available point clockwise, and if you are at a blue point, go to the next available point counterclockwise.</p>
<p style="text-align:center"><img src="/blog/assets/2022/colored-ham.svg" alt="Generating a non-crossing path through all points of a convex polygon, by using a 2-coloring of the points to determine the direction of each step" /></p>
<p>There are \(n2^n\) choices of starting point and coloring, but each path is counted eight times, because the colors of the last two points on the path don’t make a difference to where it goes, and because each path is also traced in the opposite direction using the other end as its starting point. Dividing \(n2^n\) by eight gives the formula.</p>
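<p>As a sanity check, the \(n2^{n-3}\) formula can be verified by brute force on small polygons. The sketch below (all function names are mine) enumerates vertex orderings of a convex \(n\)-gon, keeps each undirected path once, and discards those with crossing chords:</p>

```python
from itertools import permutations, combinations

def chords_cross(a, b, c, d, n):
    """Chords (a,b) and (c,d) of a convex n-gon, vertices labeled 0..n-1
    in cyclic order, cross iff their four endpoints are distinct and
    exactly one of c, d lies on the open arc from a to b."""
    if len({a, b, c, d}) < 4:
        return False
    inside_c = 0 < (c - a) % n < (b - a) % n
    inside_d = 0 < (d - a) % n < (b - a) % n
    return inside_c != inside_d

def count_hamiltonian_noncrossing_paths(n):
    """Count undirected non-crossing paths through all n vertices."""
    count = 0
    for perm in permutations(range(n)):
        if perm[0] > perm[-1]:
            continue  # count each undirected path once
        edges = list(zip(perm, perm[1:]))
        if all(not chords_cross(a, b, c, d, n)
               for (a, b), (c, d) in combinations(edges, 2)):
            count += 1
    return count

for n in range(3, 8):
    assert count_hamiltonian_noncrossing_paths(n) == n * 2 ** (n - 3)
```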
<p>This same idea also works to count non-crossing paths that are allowed to skip some of the points of the polygon. Now, color each point red, blue, or yellow. Use the same rule for building a path, but ignore the yellow points: start on a red or blue point, and when searching for an available point only go to another red or blue point.</p>
<p style="text-align:center"><img src="/blog/assets/2022/colored-path.svg" alt="Generating a non-crossing path through some points of a convex polygon, by using a 3-coloring of the points to determine the direction of each step" /></p>
<p>There are \(3^n\) choices of coloring. They have different numbers of choices of starting point, but by cyclically permuting the colors you can group them into \(3^{n-1}\) triples of colorings that together have exactly \(2n\) available (non-yellow) starting points. Each path is counted eight times just like before, so this argument would seem to give the formula \(2n\cdot 3^{n-1} / 8\) for the number of paths. But it’s not quite right. For one thing, it’s not even an integer.</p>
<p>The problem is, what happens when you color all but one of the points yellow, and that one remaining point red or blue? You get a sequence of one point only: does that count as a path? If we count these as length-zero paths (as I would prefer), then they are undercounted, because they do not have two ends, and they only have one point whose coloring (red or blue) is irrelevant, rather than the usual two points. When we divide by eight we make their contribution too small. If we don’t count them (as <a href="http://oeis.org/A261064">OEIS tells me</a> was the definition used in a 2020 Bulgarian mathematics contest) then they are overcounted, because they contribute to the formula and shouldn’t.</p>
<p>Adjusting for these one-point paths gives two alternative formulas:</p>
\[\frac{n}{4}(3^{n-1}+3)\]
<p>if we are counting one-point zero-length paths, or</p>
\[\frac{n}{4}(3^{n-1}-1),\]
<p>the formula from OEIS, if we are not counting them.</p>
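<p>Both adjusted formulas can likewise be checked by brute force for small \(n\). In this sketch (helper names are mine) single points are excluded, so adding the \(n\) one-point paths back should recover the other formula:</p>

```python
from itertools import permutations, combinations

def chords_cross(a, b, c, d, n):
    """Chords (a,b) and (c,d) of a convex n-gon, vertices labeled 0..n-1
    in cyclic order, cross iff their four endpoints are distinct and
    exactly one of c, d lies on the open arc from a to b."""
    if len({a, b, c, d}) < 4:
        return False
    inside_c = 0 < (c - a) % n < (b - a) % n
    inside_d = 0 < (d - a) % n < (b - a) % n
    return inside_c != inside_d

def count_noncrossing_paths(n):
    """Undirected non-crossing paths visiting at least 2 of n vertices."""
    total = 0
    for k in range(2, n + 1):
        for pts in combinations(range(n), k):
            for perm in permutations(pts):
                if perm[0] > perm[-1]:
                    continue  # count each undirected path once
                edges = list(zip(perm, perm[1:]))
                if all(not chords_cross(a, b, c, d, n)
                       for (a, b), (c, d) in combinations(edges, 2)):
                    total += 1
    return total

for n in range(3, 7):
    paths = count_noncrossing_paths(n)
    assert paths == n * (3 ** (n - 1) - 1) // 4       # without 1-point paths
    assert paths + n == n * (3 ** (n - 1) + 3) // 4   # with them
```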
<p>(<a href="https://mathstodon.xyz/@11011110/109039377346914779">Discuss on Mastodon</a>)</p><p class="authors">By David Eppstein</p>
David Eppsteinhttps://11011110.github.io/blog/CCI: jobs: postdoc at TU Eindhoven, University of Amsterdam, Leiden University, CWI (apply by October 31, 2022)http://cstheory-jobs.org/2022/09/21/postdoc-at-tu-eindhoven-university-of-amsterdam-leiden-university-cwi-apply-by-october-31-2022/2022-09-21T15:20:55+00:00
<p>The NETWORKS project is a collaboration of world-leading researchers from four institutions in The Netherlands: TU Eindhoven, University of Amsterdam, Leiden University and CWI. Research in NETWORKS focuses on stochastics and algorithmics for network problems. Would you like to become a postdoc in the NETWORKS project? Then we invite you to apply for one of these positions.</p>
<p>Website: <a href="https://www.thenetworkcenter.nl/Open-Positions/openposition/30/8-Postdoctoral-fellows-in-Stochastics-and-Algorithmics-COFUND-">https://www.thenetworkcenter.nl/Open-Positions/openposition/30/8-Postdoctoral-fellows-in-Stochastics-and-Algorithmics-COFUND-</a><br />
Email: info@thenetworkcenter.nl</p>
<p class="authors">By shacharlovett</p>
CCI: jobshttps://cstheory-jobs.orgTCS+ Seminar Series: TCS+ talk: Wednesday, September 28 — Joakim Blikstad, KTH Stockholmhttp://tcsplus.wordpress.com/?p=6382022-09-21T13:25:44+00:00
<p>The next TCS+ talk will take place this coming Wednesday, September 28th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). <strong>Joakim Blikstad</strong> from KTH Stockholm will speak about “<em>Nearly Optimal Communication and Query Complexity of Bipartite Matching</em>” (abstract below).</p>
<p>You can reserve a spot as an individual or a group to join us live by signing up on <a href="https://sites.google.com/view/tcsplus/welcome/next-tcs-talk">the online form</a>. Registration is <em>not</em> required to attend the interactive talk, and the link will be posted on the website the day prior to the talk; however, by registering in the form, you will receive a reminder, along with the link. (The recorded talk will also be posted <a href="https://sites.google.com/view/tcsplus/welcome/past-talks">on our website</a> afterwards) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to <a href="https://sites.google.com/view/tcsplus/welcome/suggest-a-talk">suggest</a> a possible topic or speaker, please see <a href="https://sites.google.com/view/tcsplus/">the website</a>.</p>
<blockquote class="wp-block-quote">
<p>Abstract: With a simple application of the cutting planes method, we settle the complexities of the bipartite maximum matching problem (BMM) up to poly-logarithmic factors in five models of computation: the two-party communication, AND query, OR query, XOR query, and quantum edge query models. Our results answer open problems that have been raised repeatedly since at least three decades ago [Hajnal, Maass, and Turan STOC’88; Ivanyos, Klauck, Lee, Santha, and de Wolf FSTTCS’12; Dobzinski, Nisan, and Oren STOC’14; Nisan SODA’21] and tighten the lower bounds shown by Beniamini and Nisan [STOC’21] and Zhang [ICALP’04]. Our communication protocols also work for some generalizations of BMM, such as maximum-cost bipartite b-matching and transshipment, using only Õ(|V|) bits of communications.</p>
<p>To appear in FOCS’22. Joint work with Jan van den Brand, Yuval Efron, Danupon Nanongkai, and Sagnik Mukhopadhyay. Preprint: <a href="https://arxiv.org/abs/2208.02526">https://arxiv.org/abs/2208.02526</a></p>
</blockquote><p class="authors">By plustcs</p>
TCS+ Seminar Serieshttps://tcsplus.wordpress.comCCI: jobs: Teaching professor at UC San Diego (apply by October 15, 2022)http://cstheory-jobs.org/2022/09/21/teaching-professor-at-uc-san-diego-apply-by-october-15-2022/2022-09-21T06:20:52+00:00
<p>UC San Diego Computer Science department seeks applications for an Assistant Teaching Professor. Teaching Professors are full members of the academic senate and are eligible for Security of Employment, analogous to tenure. Teaching Professors have an increased emphasis on teaching, while maintaining an active program of research, in their research area and/or education.</p>
<p>Website: <a href="https://apol-recruit.ucsd.edu/JPF03253">https://apol-recruit.ucsd.edu/JPF03253</a><br />
Email: shachar.lovett@gmail.com</p>
<p class="authors">By shacharlovett</p>
CCI: jobshttps://cstheory-jobs.orgarXiv: Computational Complexity: Intrinsic Simulations and Universality in Automata Networkshttp://arxiv.org/abs/2209.095272022-09-21T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Rios_Wilson_M/0/1/0/all/0/1">Martín Ríos-Wilson</a>, <a href="http://arxiv.org/find/cs/1/au:+Theyssier_G/0/1/0/all/0/1">Guillaume Theyssier</a> (I2M)</p><p>An automata network (AN) is a finite graph where each node holds a state from
a finite alphabet and is equipped with a local map defining the evolution of
the state of the node depending on its neighbors. They are studied from both
the dynamical and the computational complexity points of view. Inspired by
well-established notions in the context of cellular automata, we develop a
theory of intrinsic simulations and universality for families of automata
networks. We establish many consequences of intrinsic universality in terms of
complexity of orbits (periods of attractors, transients, etc) as well as
hardness of the standard well-studied decision problems for automata networks
(short/long term prediction, reachability, etc). Along the way, we prove
orthogonality results for these problems: the hardness of a single one does not
imply hardness of the others, while intrinsic universality implies hardness of
all of them. As a complement, we develop a proof technique to establish
intrinsic simulation and universality results which is suitable for dealing with
families of symmetric networks where connections are non-oriented. It is based
on an operation of glueing of networks, which allows complex orbits in large
networks to be produced from compatible pseudo-orbits in small networks. As an
illustration, we give a short proof that the family of networks where each node
obeys the rule of the 'game of life' cellular automaton is strongly universal.
This formalism and proof technique is also applied in a companion paper devoted
to studying the effect of update schedules on intrinsic universality for
concrete symmetric families of automata networks.
</p>
arXiv: Computational Complexityhttps://arxiv.org/list/cs.CC/recentarXiv: Computational Complexity: VEST is W[2]-hardhttp://arxiv.org/abs/2209.097882022-09-21T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Skotnica_M/0/1/0/all/0/1">Michael Skotnica</a></p><p>In this short note, we show that the problem of VEST is $W[2]$-hard for
parameter $k$. This strengthens a result of Matou\v{s}ek, who showed
$W[1]$-hardness of that problem. The consequence of this result is that
computing the $k$-th homotopy group of a $d$-dimensional space for $d > 3$ is
$W[2]$-hard for parameter $k$.
</p>
arXiv: Computational Complexityhttps://arxiv.org/list/cs.CC/recentarXiv: Computational Geometry: A tight bound for the number of edges of matchstick graphshttp://arxiv.org/abs/2209.098002022-09-21T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/math/1/au:+Lavollee_J/0/1/0/all/0/1">Jérémy Lavollée</a>, <a href="http://arxiv.org/find/math/1/au:+Swanepoel_K/0/1/0/all/0/1">Konrad Swanepoel</a></p><p>A matchstick graph is a plane graph with edges drawn as unit-distance line
segments. Harborth introduced these graphs in 1986 and conjectured that the
maximum number of edges for a matchstick graph on $n$ vertices is $\lfloor
3n-\sqrt{12n-3} \rfloor$. In this paper we prove this conjecture for all $n\geq
1$. The main geometric ingredient of the proof is an isoperimetric inequality
related to Lhuilier's inequality.
</p>
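<p>As a quick numeric illustration (not from the paper), the bound $\lfloor 3n-\sqrt{12n-3} \rfloor$ for small $n$ matches familiar constructions: a unit triangle for $n=3$, two unit triangles glued along an edge for $n=4$, and a unit hexagon triangulated around its center for $n=7$.</p>

```python
import math

def max_matchstick_edges(n):
    """Harborth's conjectured bound, proved tight by Lavollee and
    Swanepoel: a matchstick graph on n vertices has at most
    floor(3n - sqrt(12n - 3)) edges."""
    return math.floor(3 * n - math.sqrt(12 * n - 3))

assert max_matchstick_edges(3) == 3   # unit triangle
assert max_matchstick_edges(4) == 5   # two triangles sharing an edge
assert max_matchstick_edges(7) == 12  # hexagon of six unit triangles
```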
arXiv: Computational Geometryhttps://arxiv.org/list/cs.CG/recentarXiv: Data Structures and Algorithms: Natural Wave Numbers, Natural Wave Co-numbers, and the Computation of the Primeshttp://arxiv.org/abs/2209.093132022-09-21T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Smith_T/0/1/0/all/0/1">Terence R. Smith</a></p><p>The paper exploits an isomorphism between the natural numbers N and a space U
of periodic sequences of the roots of unity in constructing a recursive
procedure for representing and computing the prime numbers. The nth wave number
${\bf u}_n$ is the countable sequence of the nth roots of unity having
frequencies k/n for all integer phases k. The space U is closed under a
commutative and associative binary operation ${\bf u}_m \odot{\bf u}_n={\bf
u}_{mn}$, termed the circular product, and is isomorphic with N under their
respective product operators. Functions are defined on U that partition wave
numbers into two complementary sequences, of which the co-number $ {\overset
{\bf \ast }{ \bf u}}_n$ is a function of a wave number in which zeros replace
its positive roots of unity. The recursive procedure $ {\overset {\bf \ast }{
\bf U}}_{N+1}= {\overset {\bf \ast }{ \bf U}}_{N}\odot{\overset {\bf \ast }{\bf
u}}_{{N+1}}$ represents prime numbers explicitly in terms of preceding prime
numbers, starting with $p_1=2$, and is shown never to terminate. If ${p}_1, ...
, { p}_{N+1}$ are the first $N+1$ prime phases, then the phases in the range
$p_{N+1} \leq k < p^2_{N+1}$ that are associated with the non-zero terms of $
{\overset {\bf \ast }{\bf U}}_{N}$ are, together with $ p_1, ...,p_N$, all of
the prime phases less than $p^2_{N+1}$. When applied with all of the primes
identified at the previous step, the recursive procedure identifies
approximately $7^{2(N-1)}/(2(N-1)\ln 7)$ primes at each iteration for $N>1$.
When the phases of wave numbers are represented in modular arithmetic, the
prime phases are representable in terms of sums of reciprocals of the initial
set of prime phases and have a relation with the zeta-function.
</p>
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Data structures for topologically sound higher-dimensional diagram rewritinghttp://arxiv.org/abs/2209.095092022-09-21T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/math/1/au:+Hadzihasanovic_A/0/1/0/all/0/1">Amar Hadzihasanovic</a>, <a href="http://arxiv.org/find/math/1/au:+Kessler_D/0/1/0/all/0/1">Diana Kessler</a></p><p>We present a computational implementation of diagrammatic sets, a model of
higher-dimensional diagram rewriting that is "topologically sound": diagrams
admit a functorial interpretation as homotopies in cell complexes. This has
potential applications both in the formalisation of higher algebra and category
theory and in computational algebraic topology. We describe data structures for
well-formed shapes of diagrams of arbitrary dimensions and provide a solution
to their isomorphism problem in time $O(n^3 \log n)$. On top of this, we define
a type theory for rewriting in diagrammatic sets and provide a semantic
characterisation of its syntactic category. All data structures and algorithms
are implemented in the Python library rewalt, which also supports various
visualisations of diagrams.
</p>
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Exact Matching and the Top-k Perfect Matching Problemhttp://arxiv.org/abs/2209.096612022-09-21T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Maalouly_N/0/1/0/all/0/1">Nicolas El Maalouly</a>, <a href="http://arxiv.org/find/cs/1/au:+Wulf_L/0/1/0/all/0/1">Lasse Wulf</a></p><p>The aim of this note is to provide a reduction of the Exact Matching problem
to the Top-$k$ Perfect Matching Problem. Together with earlier work by El
Maalouly, this shows that the two problems are polynomial-time equivalent.
</p>
<p>The Exact Matching Problem is a well-known 40-year-old problem for which a
randomized, but no deterministic poly-time algorithm has been discovered. The
Top-$k$ Perfect Matching Problem is the problem of finding a perfect matching
which maximizes the total weight of the $k$ heaviest edges contained in it.
</p>
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Maximizing a Submodular Function with Bounded Curvature under an Unknown Knapsack Constrainthttp://arxiv.org/abs/2209.096682022-09-21T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Klimm_M/0/1/0/all/0/1">Max Klimm</a>, <a href="http://arxiv.org/find/cs/1/au:+Knaack_M/0/1/0/all/0/1">Martin Knaack</a></p><p>This paper studies the problem of maximizing a monotone submodular function
under an unknown knapsack constraint. A solution to this problem is a policy
that decides which item to pack next based on the past packing history. The
robustness factor of a policy is the worst case ratio of the solution obtained
by following the policy and an optimal solution that knows the knapsack
capacity. We develop an algorithm with a robustness factor that is decreasing
in the curvature $c$ of the submodular function. For the extreme case $c=0$,
corresponding to a modular objective, it matches a previously known and best
possible robustness factor of $1/2$. For the other extreme case of $c=1$ it
yields a robustness factor of $\approx 0.35$ improving over the best previously
known robustness factor of $\approx 0.06$.
</p>
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Development of a Parallel BAT and Its Applications in Binary-state Network Reliability Problemshttp://arxiv.org/abs/2209.097112022-09-21T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Yeh_W/0/1/0/all/0/1">Wei-Chang Yeh</a></p><p>Various networks are broadly and deeply applied in real-life applications.
Reliability is the most important index for measuring the performance of all
network types. Among the various algorithms, only implicit enumeration
algorithms, such as depth-first-search, breadth-first-search, universal
generating function methodology, binary-decision diagram, and
binary-addition-tree algorithm (BAT), can be used to calculate the exact
network reliability. However, implicit enumeration algorithms can only be used
to solve small-scale network reliability problems. The BAT was recently
proposed as a simple, fast, easy-to-code, and flexible make-to-fit
exact-solution algorithm. Based on the experimental results, the BAT and its
variants outperformed other implicit enumeration algorithms. Hence, to overcome
the above-mentioned size obstacle, a new parallel
BAT (PBAT) was proposed to improve the BAT, based on a multi-threaded compute
architecture, to calculate the binary-state network reliability problem, which
is fundamental for all types of network reliability problems. From the analysis
of the time complexity and experiments conducted on 20 benchmarks of
binary-state network reliability problems, PBAT was able to efficiently solve
medium-scale network reliability problems.
</p>
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Modeling the Small-World Phenomenon with Road Networkshttp://arxiv.org/abs/2209.098882022-09-21T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Goodrich_M/0/1/0/all/0/1">Michael T. Goodrich</a>, <a href="http://arxiv.org/find/cs/1/au:+Ozel_E/0/1/0/all/0/1">Evrim Ozel</a></p><p>Dating back to two famous experiments by the social psychologist Stanley
Milgram in the 1960s, the small-world phenomenon is the idea that all people
are connected through a short chain of acquaintances that can be used to route
messages. Many subsequent papers have attempted to model this phenomenon, with
most concentrating on the "short chain" of acquaintances rather than their
ability to efficiently route messages. In this paper, we study the small-world
navigability of the U.S. road network, with the goal of providing a model that
explains how messages in the original small-world experiments could be routed
along short paths using U.S. roads. To this end, we introduce the Neighborhood
Preferential Attachment model, which combines elements from Kleinberg's model
and the Barab\'asi-Albert model, such that long-range links are chosen
according to both the degrees and (road-network) distances of vertices in the
network. We empirically evaluate all three models by running a decentralized
routing algorithm, where each vertex only has knowledge of its own neighbors,
and find that our model outperforms both of these models in terms of the
average hop length. Moreover, our experiments indicate that similar to the
Barab\'asi-Albert model, networks generated by our model are scale-free, which
could be a more realistic representation of acquaintanceship links in the
original small-world experiment.
</p>
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: On the Correlation Gap of Matroidshttp://arxiv.org/abs/2209.098962022-09-21T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/math/1/au:+Husic_E/0/1/0/all/0/1">Edin Husić</a>, <a href="http://arxiv.org/find/math/1/au:+Koh_Z/0/1/0/all/0/1">Zhuan Khye Koh</a>, <a href="http://arxiv.org/find/math/1/au:+Loho_G/0/1/0/all/0/1">Georg Loho</a>, <a href="http://arxiv.org/find/math/1/au:+Vegh_L/0/1/0/all/0/1">László A. Végh</a></p><p>A set function can be extended to the unit cube in various ways; the
correlation gap measures the ratio between two natural extensions. This
quantity has been identified as the performance guarantee in a range of
approximation algorithms and mechanism design settings. It is known that the
correlation gap of a monotone submodular function is $1-1/e$, and this is tight
even for simple matroid rank functions.
</p>
<p>We initiate a fine-grained study of correlation gaps of matroid rank
functions. In particular, we present improved lower bounds on the correlation
gap as parametrized by the rank and the girth of the matroid. We also show that
the worst correlation gap of a weighted matroid rank function is achieved under
uniform weights. Such improved lower bounds have direct applications for
submodular maximization under matroid constraints, mechanism design, and
contention resolution schemes. Previous work relied on implicit correlation gap
bounds for problems such as list decoding and approval voting.
</p>
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentECCC Papers: TR22-133 | Downward Self-Reducibility in TFNP |
Prahladh Harsha,
Daniel Mitropolsky,
Alon Rosenhttps://eccc.weizmann.ac.il/report/2022/1332022-09-20T18:28:17+00:00
A problem is downward self-reducible if it can be solved efficiently given an oracle that returns
solutions for strictly smaller instances. In the decisional landscape, downward self-reducibility is
well studied and it is known that all downward self-reducible problems are in PSPACE. In this
paper, we initiate the study of downward self-reducible search problems which are guaranteed to
have a solution — that is, the downward self-reducible problems in TFNP. We show that most
natural PLS-complete problems are downward self-reducible and any downward self-reducible
problem in TFNP is contained in PLS. Furthermore, if the downward self-reducible problem
is in UTFNP (i.e. it has a unique solution), then it is actually contained in CLS. This implies
that if integer factoring is downward self-reducible then it is in fact in CLS, suggesting that no
efficient factoring algorithm can work by using the factorizations of smaller numbers.
ECCC Papershttps://eccc.weizmann.ac.il/arXiv: Computational Complexity: Better Hardness Results for the Minimum Spanning Tree Congestion Problemhttp://arxiv.org/abs/2209.082192022-09-20T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Luu_H/0/1/0/all/0/1">Huong Luu</a>, <a href="http://arxiv.org/find/cs/1/au:+Chrobak_M/0/1/0/all/0/1">Marek Chrobak</a></p><p>In the spanning tree congestion problem, given a connected graph $G$, the
objective is to compute a spanning tree $T$ in $G$ for which the maximum edge
congestion is minimized, where the congestion of an edge $e$ of $T$ is the
number of vertex pairs adjacent in $G$ for which the path connecting them in
$T$ traverses $e$. The problem is known to be NP-hard, but its approximability
is still poorly understood, and it is not even known whether the optimum can be
efficiently approximated with ratio $o(n)$. In the decision version of this
problem, denoted STC-$K$, we need to determine if $G$ has a spanning tree with
congestion at most $K$. It is known that STC-$K$ is NP-complete for $K\ge 8$,
and this implies a lower bound of $1.125$ on the approximation ratio of
minimizing congestion. On the other hand, STC-$3$ can be solved in polynomial
time, with the complexity status of this problem for $K\in \{4,5,6,7\}$
remaining open. We substantially improve the earlier hardness result
by proving that STC-$K$ is NP-complete for $K\ge 5$. This leaves only the case
$K=4$ open, and improves the lower bound on the approximation ratio to $1.2$.
</p>
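<p>To make the congestion definition concrete, here is a brute-force sketch (exponential time; all names are mine, not from the paper) that computes the spanning tree congestion of tiny graphs. For example, the optimum for $K_4$ is a star, whose worst edge carries 3 adjacent pairs.</p>

```python
from collections import Counter
from itertools import combinations

def spanning_tree_congestion(n, graph_edges):
    """Spanning tree congestion of a small graph on vertices 0..n-1:
    minimize, over all spanning trees T, the maximum congestion of an
    edge e of T, i.e. the number of vertex pairs adjacent in G whose
    unique T-path traverses e."""
    def connected(edges):
        adj = {v: set() for v in range(n)}
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        seen, stack = {0}, [0]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n, adj

    def tree_path(adj, s, t):
        # unique s-t path in a tree, by DFS that never backtracks
        stack = [(s, (s,))]
        while stack:
            v, path = stack.pop()
            if v == t:
                return path
            stack.extend((w, path + (w,)) for w in adj[v]
                         if len(path) < 2 or w != path[-2])

    best = None
    for tree in combinations(graph_edges, n - 1):
        ok, adj = connected(tree)
        if not ok:  # n-1 edges + connected <=> spanning tree
            continue
        congestion = Counter()
        for a, b in graph_edges:
            path = tree_path(adj, a, b)
            for e in zip(path, path[1:]):
                congestion[frozenset(e)] += 1
        worst = max(congestion.values())
        best = worst if best is None else min(best, worst)
    return best

k4 = [(a, b) for a, b in combinations(range(4), 2)]
cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert spanning_tree_congestion(4, k4) == 3      # a star is optimal
assert spanning_tree_congestion(4, cycle4) == 2  # any path tree works
```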
arXiv: Computational Complexityhttps://arxiv.org/list/cs.CC/recentarXiv: Computational Complexity: On Relaxed Locally Decodable Codes for Hamming and Insertion-Deletion Errorshttp://arxiv.org/abs/2209.086882022-09-20T00:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Block_A/0/1/0/all/0/1">Alex Block</a>, <a href="http://arxiv.org/find/cs/1/au:+Blocki_J/0/1/0/all/0/1">Jeremiah Blocki</a>, <a href="http://arxiv.org/find/cs/1/au:+Cheng_K/0/1/0/all/0/1">Kuan Cheng</a>, <a href="http://arxiv.org/find/cs/1/au:+Grigorescu_E/0/1/0/all/0/1">Elena Grigorescu</a>, <a href="http://arxiv.org/find/cs/1/au:+Li_X/0/1/0/all/0/1">Xin Li</a>, <a href="http://arxiv.org/find/cs/1/au:+Zheng_Y/0/1/0/all/0/1">Yu Zheng</a>, <a href="http://arxiv.org/find/cs/1/au:+Zhu_M/0/1/0/all/0/1">Minshen Zhu</a></p><p>Locally Decodable Codes (LDCs) are error-correcting codes
$C:\Sigma^n\rightarrow \Sigma^m$ with super-fast decoding algorithms. They are
important mathematical objects in many areas of theoretical computer science,
yet the best constructions so far have codeword length $m$ that is
super-polynomial in $n$, for codes with constant query complexity and constant
alphabet size. In a very surprising result, Ben-Sasson et al. showed how to
construct a relaxed version of LDCs (RLDCs) with constant query complexity and
almost linear codeword length over the binary alphabet, and used them to obtain
significantly-improved constructions of Probabilistically Checkable Proofs. In
this work, we study RLDCs in the standard Hamming-error setting, and introduce
their variants in the insertion and deletion (Insdel) error setting. Insdel
LDCs were first studied by Ostrovsky and Paskin-Cherniavsky, and are further
motivated by recent advances in DNA random access bio-technologies, in which
the goal is to retrieve individual files from a DNA storage database. Our first
result is an exponential lower bound on the length of Hamming RLDCs making 2
queries, over the binary alphabet. This answers a question explicitly raised by
Gur and Lachish. Our result exhibits a "phase-transition"-type behavior on the
codeword length for constant-query Hamming RLDCs. We further define two
variants of RLDCs in the Insdel-error setting, a weak and a strong version. On
the one hand, we construct weak Insdel RLDCs with parameters matching
those of the Hamming variants. On the other hand, we prove exponential lower
bounds for strong Insdel RLDCs. These results demonstrate that, while these
variants are equivalent in the Hamming setting, they are significantly
different in the insdel setting. Our results also prove a strict separation
between Hamming RLDCs and Insdel RLDCs.
</p>
arXiv: Computational Complexityhttps://arxiv.org/list/cs.CC/recent