Theory of Computing Report

Computational Complexity: Where do Non-Primitive Recursive Functions come up NATURALLY? (2023-12-04)
<p>The following is a conversation between Clyde Kruskal and Bill Gasarch.</p><p>CLYDE: Bill, a student, Ian Roberts, asked me if there are any non-primitive recursive functions that people actually want to compute.</p><p>BILL: Offhand I would say no, but non-prim rec functions DO come up in natural ways.</p><p>CLYDE: That's not what he asked.</p><p>BILL: Even so, that's what I will blog about. OH, one more thing, why does he want to know?</p><p>CLYDE: Ask him directly. (BILL then emailed Ian.)</p><p>BILL: Good question about NATURAL problems that are not prim-rec, but why do you want to know?</p><p>IAN: Meyer and Ritchie proved (see <a href="https://dl.acm.org/doi/10.1145/800196.806014">here</a>) that if you limit control flow to IF statements and FOR loops with fixed iteration bounds, then the class of functions you can implement is exactly the primitive recursive functions. So I was wondering if I could avoid ever using WHILE loops, since they are harder to reason about. </p><p>BILL: YES, you COULD avoid ever using WHILE loops; however, there are times when using them is the best way to go.</p><p>CLYDE: Bill, when was the last time you wrote a program? And LaTeX does not count. </p><p>BILL: Good point. Rather than take my word for it, let's ASK my readers. I'll add that to my list of RANDOM THOUGHTS ABOUT non-prim rec functions. </p><p>SO, random thoughts on non-prim rec functions:</p><p>0) Are there problems for which writing a WHILE loop is the way to go even though one is not needed? </p><p>1) HALT is not prim rec and we want to compute it. Oh well. All future examples will be computable. <br /></p><p>2) QUESTION: Are there simple programming languages such that HALT restricted to them is decidable but not primitive recursive? I suspect one could contrive such a language, so I ask for both natural and contrived examples. </p><p>3) The Paris-Harrington numbers from Ramsey Theory are computable and grow MUCH faster than any prim rec function. Indeed, they grow much faster than Ackermann's function. See the <a href="https://en.wikipedia.org/wiki/Paris%E2%80%93Harrington_theorem">Wikipedia entry</a>.</p><p>4) The Kanamori-McAloon Theorem from Ramsey theory yields numbers that are computable and grow MUCH faster than any prim rec function. Indeed, they grow much faster than Ackermann's function. See the <a href="https://en.wikipedia.org/wiki/Kanamori%E2%80%93McAloon_theorem">Wikipedia entry</a>. They are not as well known as the Paris-Harrington numbers. Hopefully this blog post will help change that. </p><p>5) Goodstein's Theorem yields numbers that are computable and grow MUCH faster than any prim rec function. Indeed, they grow much faster than Ackermann's function. See the <a href="https://en.wikipedia.org/wiki/Goodstein%27s_theorem">Wikipedia entry</a> and/or my <a href="https://blog.computationalcomplexity.org/2012/12/goodstein-sequences-its-his-100th.html">blog post</a> on them.</p><p>6) QUESTION: Of PH, KM, GOOD, which grows fastest? Second fastest? Third fastest? Perhaps some are tied. <br /></p><p>7) QUESTION: We know that GO and CHESS have very high complexity but are still prim rec. We know that there are some math games (e.g., <a href="https://en.wikipedia.org/wiki/Hydra_game">the Hydra game</a>) that are not prim rec. Are there any FUN games whose complexity is NOT prim rec?</p><p>8) Tarjan's UNION-FIND data structure has an amortized complexity of roughly O(alpha(n)) per operation, where alpha(n) is the inverse of Ackermann's function. This is also a lower bound. See the <a href="https://en.wikipedia.org/wiki/Disjoint-set_data_structure">Wikipedia entry</a> on the disjoint-set data structure. QUESTION: Is Tarjan's UNION-FIND data structure actually used? It can be used to speed up Kruskal's MST algorithm, but that just takes the question back one step: Is MST a problem people really want to solve? I asked Lance and he asked chatty (ChatGPT). For the results of that see <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/uf.pdf">here</a>. The answer seems to be YES, though I wonder if the speedup that UNION-FIND gives is important. UNION-FIND is also used in the <a href="https://en.wikipedia.org/wiki/Hoshen%E2%80%93Kopelman_algorithm">Hoshen-Kopelman Algorithm</a> for (to quote Wikipedia) <i>labeling clusters on a grid, where the grid is a regular network of cells, with the cells being either occupied or unoccupied</i>. Other issues: (a) Is UNION-FIND hard to code up? Lance tells me that it is easy to code up. (b) Is the constant reasonable? <br /></p><p>9) Is the <a href="https://www.ackermansecurity.com/home-security-systems">Ackerman Security Company</a> called that because they claim that breaking their security is as hard as computing Ackermann's function? Unlikely: they spell their name with only one n at the end. Even so, my class believed me when I told them that. <br /></p><p>10) The finite version of Kruskal's Tree Theorem YADA YADA YADA not prim rec. See the <a href="https://en.wikipedia.org/wiki/Kruskal%27s_tree_theorem">Wikipedia entry</a>.</p><p>CLYDE: You can't YADA YADA YADA my Uncle Joe!</p><p>BILL: It's my party and you'll cry if you want to, cry if you want to, cry if you want to. (See Lesley Gore's song <a href="https://www.youtube.com/watch?v=ft_QfY16CxU">It's My Party and I'll Cry if I Want To</a>, which is not about non-primitive recursive functions. Also see her sequel <a href="https://www.youtube.com/watch?v=mw9ZQVFS-vA">Judy's Turn to Cry</a>. A much better song, with a better message for teenagers, is her <a href="https://www.youtube.com/watch?v=A4omo3xNstE">You Don't Own Me</a>.)<br /></p><p>CLYDE: Oh well. However, I'll make sure to tell that example to my class.</p><p><br /></p><p class="authors">By gasarch</p>
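On Ian's question about avoiding WHILE loops: Ackermann's function is the standard witness that WHILE (or unbounded recursion) sometimes cannot be avoided. By the Meyer-Ritchie result quoted above, no program built from IF statements and bounded FOR loops alone computes it, yet a single WHILE loop with an explicit stack does. A minimal sketch (the iterative stack trick is folklore; the names are mine):

```python
def ackermann(m, n):
    # Compute Ackermann's function A(m, n) with one WHILE loop and an
    # explicit stack of pending first arguments. No FOR loop with a
    # bound fixed in advance can do this: A grows faster than every
    # primitive recursive function.
    stack = [m]
    while stack:
        m = stack.pop()
        if m == 0:
            n += 1                      # A(0, n) = n + 1
        elif n == 0:
            stack.append(m - 1)         # A(m, 0) = A(m - 1, 1)
            n = 1
        else:
            stack.append(m - 1)         # A(m, n) = A(m - 1, A(m, n - 1))
            stack.append(m)
            n -= 1
    return n
```

Even ackermann(4, 2) is a number with about 19,729 digits, so only tiny inputs are feasible; the point is expressibility, not practicality.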
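As a data point for question (a) about UNION-FIND: Lance's claim that it is easy to code up checks out. Here is the textbook version with union by rank and path compression, whose amortized cost per operation is O(alpha(n)); this is a generic sketch, not any particular production implementation:

```python
class UnionFind:
    """Disjoint-set forest with union by rank and path compression."""

    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own root
        self.rank = [0] * n           # upper bound on tree height

    def find(self, x):
        # Locate the root, then compress: point every node on the
        # path directly at the root.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        # Merge the sets containing x and y; attach the shorter tree
        # under the taller one. Returns False if already merged.
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True
```

In Kruskal's MST algorithm one calls union(u, v) for each edge in weight order and keeps the edge exactly when the call returns True.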
Computational Complexity (http://blog.computationalcomplexity.org/)

arXiv: Computational Complexity: Complexity-theoretic foundations of BosonSampling with a linear number of modes (http://arxiv.org/abs/2312.00286, 2023-12-04)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/quant-ph/1/au:+Bouland_A/0/1/0/all/0/1">Adam Bouland</a>, <a href="http://arxiv.org/find/quant-ph/1/au:+Brod_D/0/1/0/all/0/1">Daniel Brod</a>, <a href="http://arxiv.org/find/quant-ph/1/au:+Datta_I/0/1/0/all/0/1">Ishaun Datta</a>, <a href="http://arxiv.org/find/quant-ph/1/au:+Fefferman_B/0/1/0/all/0/1">Bill Fefferman</a>, <a href="http://arxiv.org/find/quant-ph/1/au:+Grier_D/0/1/0/all/0/1">Daniel Grier</a>, <a href="http://arxiv.org/find/quant-ph/1/au:+Hernandez_F/0/1/0/all/0/1">Felipe Hernandez</a>, <a href="http://arxiv.org/find/quant-ph/1/au:+Oszmaniec_M/0/1/0/all/0/1">Michal Oszmaniec</a></p><p>BosonSampling is the leading candidate for demonstrating quantum
computational advantage in photonic systems. While we have recently seen many
impressive experimental demonstrations, there is still a formidable distance
between the complexity-theoretic hardness arguments and current experiments.
One of the largest gaps involves the ratio of photons to modes: all current
hardness evidence assumes a "high-mode" regime in which the number of linear
optical modes scales at least quadratically in the number of photons. By
contrast, current experiments operate in a "low-mode" regime with a linear
number of modes. In this paper we bridge this gap, bringing the hardness
evidence for the low-mode experiments to the same level as had been previously
established for the high-mode regime. This involves proving a new
worst-to-average-case reduction for computing the Permanent that is robust to
large numbers of row repetitions and also to distributions over matrices with
correlated entries.
</p>
arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)

arXiv: Computational Geometry: On the $\ell_0$ Isoperimetric Coefficient of Measurable Sets (http://arxiv.org/abs/2312.00015, 2023-12-04)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/math/1/au:+V_M/0/1/0/all/0/1">Manuel Fernandez V</a></p><p>In this paper we prove that the $\ell_0$ isoperimetric coefficient for any
axis-aligned cubes, $\psi_{\mathcal{C}}$, is $\Theta(n^{-1/2})$ and that the
isoperimetric coefficient for any measurable body $K$, $\psi_K$, is of order
$O(n^{-1/2})$. As a corollary we deduce that axis-aligned cubes essentially
"maximize" the $\ell_0$ isoperimetric coefficient: There exists a positive
constant $q > 0$ such that $\psi_K \leq q \cdot \psi_{\mathcal{C}}$, whenever
$\mathcal{C}$ is an axis-aligned cube and $K$ is any measurable set. Lastly, we
give immediate applications of our results to the mixing time of
Coordinate-Hit-and-Run for sampling points uniformly from convex bodies.
</p>
arXiv: Computational Geometry (https://arxiv.org/list/cs.CG/recent)

arXiv: Data Structures and Algorithms: A Threshold Greedy Algorithm for Noisy Submodular Maximization (http://arxiv.org/abs/2312.00155, 2023-12-04)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Chen_W/0/1/0/all/0/1">Wenjing Chen</a>, <a href="http://arxiv.org/find/cs/1/au:+Xing_S/0/1/0/all/0/1">Shuo Xing</a>, <a href="http://arxiv.org/find/cs/1/au:+Crawford_V/0/1/0/all/0/1">Victoria G. Crawford</a></p><p>We consider the optimization problem of cardinality constrained maximization
of a monotone submodular set function $f:2^U\to\mathbb{R}_{\geq 0}$ (SM) with
noisy evaluations of $f$. In particular, it is assumed that we do not have
value oracle access to $f$, but instead for any $X\subseteq U$ and $u\in U$ we
can take samples from a noisy distribution with expected value
$f(X\cup\{u\})-f(X)$. Our goal is to develop algorithms in this setting that
take as few samples as possible, and return a solution with an approximation
guarantee relative to the optimal with high probability. We propose the
algorithm Confident Threshold Greedy (CTG), which is based on the threshold
greedy algorithm of Badanidiyuru and Vondrak [1] and samples adaptively in
order to produce an approximate solution with high probability. We prove that
CTG achieves an approximation ratio arbitrarily close to $1-1/e$, depending on
input parameters. We provide an experimental evaluation on real instances of SM
and demonstrate the sample efficiency of CTG.
</p>
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)

arXiv: Data Structures and Algorithms: Online Graph Coloring with Predictions (http://arxiv.org/abs/2312.00601, 2023-12-04)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Antoniadis_A/0/1/0/all/0/1">Antonios Antoniadis</a>, <a href="http://arxiv.org/find/cs/1/au:+Broersma_H/0/1/0/all/0/1">Hajo Broersma</a>, <a href="http://arxiv.org/find/cs/1/au:+Meng_Y/0/1/0/all/0/1">Yang Meng</a></p><p>We introduce learning augmented algorithms to the online graph coloring
problem. Although the simple greedy algorithm FirstFit is known to perform
poorly in the worst case, we are able to establish a relationship between the
structure of any input graph $G$ that is revealed online and the number of
colors that FirstFit uses for $G$. Based on this relationship, we propose an
online coloring algorithm FirstFitPredictions that extends FirstFit while
making use of machine learned predictions. We show that FirstFitPredictions is
both \emph{consistent} and \emph{smooth}. Moreover, we develop a novel
framework for combining online algorithms at runtime specifically for the
online graph coloring problem. Finally, we show how this framework can be used
to robustify by combining it with any classical online coloring algorithm (that
disregards the predictions).
</p>
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)

ECCC Papers: TR23-193 | On the randomized complexity of range avoidance, with applications to cryptography and metacomplexity | Eldon Chung, Alexander Golovnev, Zeyong Li, Maciej Obremski, Sidhant Saraogi, Noah Stephens-Davidowitz (https://eccc.weizmann.ac.il/report/2023/193, 2023-12-03)
We study the Range Avoidance Problem (Avoid), in which the input is an expanding circuit $C : \{0,1\}^n \to \{0,1\}^{n+1}$, and the goal is to find a $y \in \{0,1\}^{n+1}$ that is not in the image of $C$. We are interested in the randomized complexity of this problem, i.e., in the question of whether there exist efficient randomized algorithms that output a valid solution to Avoid with probability significantly greater than $1/2$. (Notice that achieving probability $1/2$ is trivial by random guessing.)
Our first main result shows that cryptographic one-way functions exist unless Avoid can be solved efficiently with probability $1-1/n^{C}$ (on efficiently sampleable input distributions). In other words, even a relatively weak notion of hardness of Avoid already implies the existence of all cryptographic primitives in Minicrypt.
In fact, we show something a bit stronger than this. In particular, we introduce two new natural problems, which we call CollisionAvoid and AffineAvoid. Like Avoid, these are total search problems in the polynomial hierarchy. They are provably at least as hard as Avoid, and seem to be notably harder. We show that one-way functions exist if either of these problems is weakly hard on average.
Our second main result shows that in certain settings Avoid can be solved with probability 1 in expected polynomial time, given access to either an oracle that approximates the Kolmogorov-Levin complexity of a bit string, or an oracle that approximates conditional time-bounded Kolmogorov complexity. This shows an interesting connection between Avoid and meta-complexity.
Finally, we discuss the possibility of proving hardness of Avoid. We show barriers preventing simple reductions from hard problems in FNP to Avoid.
ECCC Papers (https://eccc.weizmann.ac.il/)

ECCC Papers: TR23-192 | A Note On the Universality of Black-box MKtP Solvers | Noam Mazor, Rafael Pass (https://eccc.weizmann.ac.il/report/2023/192, 2023-12-03)
The relationships between various meta-complexity problems are not well understood in the worst-case regime, including whether the search version is harder than the decision version, whether the hardness scales with the ``threshold'', and how the hardness of different meta-complexity problems relates to one another, and to the task of function inversion.
In this note, we present resolutions to some of these questions with respect to the black-box analog of these problems. In more detail, let $MK^t_MP[s]$ denote the language consisting of strings $x$ with $K_{M}^t(x) < s(|x|)$, where $K_M^t(x)$ denotes the $t$-bounded Kolmogorov complexity of $x$ with $M$ as the underlying (Universal) Turing machine, and let $search-MK^t_MP[s]$ denote the search version of the same problem.
We show that if for every Universal Turing machine $U$ there exists a $2^{\alpha n}poly(n)$-size $U$-oracle aided circuit deciding $MK^t_UP [n-O(1)]$, then for every function $s$, and every---not necessarily universal---Turing machine $M$, there exists a $2^{\alpha s(n)}poly(n)$-size $M$-oracle aided circuit solving $search-MK^t_MP[s(n)]$; this in turn yields circuits of roughly the same size for both the Minimum Circuit Size Problem (MCSP), and the function inversion problem, as they can be thought of as instantiating $MK^t_MP$ with particular choices of (a non-universal) TMs $M$ (the circuit emulator for the case of MCSP, and the function evaluation in the case of function inversion).
As a corollary of independent interest, we get that the complexity of black-box function inversion is (roughly) the same as the complexity of black-box deciding $MK^t_UP[n-O(1)]$ for any universal TM $U$; that is, also in the worst-case regime, function inversion is ``equivalent" to deciding $MK^t_UP$, in the black-box setting.
ECCC Papers (https://eccc.weizmann.ac.il/)

ECCC Papers: TR23-191 | On the Power of Homogeneous Algebraic Formulas | Nutan Limaye, Hervé Fournier, Srikanth Srinivasan, Sébastien Tavenas (https://eccc.weizmann.ac.il/report/2023/191, 2023-12-03)
Proving explicit lower bounds on the size of algebraic formulas is a long-standing open problem in the area of algebraic complexity theory. Recent results in the area (e.g. a lower bound against constant-depth algebraic formulas due to Limaye, Srinivasan, and Tavenas (FOCS 2021)) have indicated a way forward for attacking this question: show that we can convert a general algebraic formula to a 'homogeneous' algebraic formula with moderate blow-up in size, and prove strong lower bounds against the latter model.
Here, a homogeneous algebraic formula $F$ for a polynomial $P$ is a formula in which all subformulas compute homogeneous polynomials. In particular, if $P$ is homogeneous of degree $d$, $F$ does not contain subformulas that compute polynomials of degree greater than $d$.
We investigate the feasibility of the above strategy and prove a number of positive and negative results in this direction.
--- Lower bounds against weighted homogeneous formulas: We show the first lower bounds against homogeneous formulas 'of any depth' in the 'weighted' setting. Here, each variable has a given weight and the weight of a monomial is the sum of weights of the variables in it. This result builds on a lower bound of Hrubeš and Yehudayoff (Computational Complexity (2011)) against homogeneous multilinear formulas. This result is a strong indication that lower bounds against homogeneous formulas are within reach.
--- Improved (quasi-)homogenization for formulas: A simple folklore argument shows that any formula $F$ for a homogeneous polynomial of degree $d$ can be homogenized with a size blow-up of $d^{O(\log s)}.$ We show that this can be improved superpolynomially over fields of characteristic $0$ as long as $d = s^{o(1)}.$ Such a result was previously only known when $d = (\log s)^{1+o(1)}$ (Raz (J. ACM (2013))). Further, we show how to get rid of the condition on $d$ at the expense of getting a 'quasi-homogenization' result: this means that subformulas can compute polynomials of degree up to poly$(d).$
--- Lower bounds for non-commutative homogenization: A recent result of Dutta, Gesmundo, Ikenmeyer, Jindal and Lysikov (2022) implies that to homogenize algebraic formulas of any depth, it suffices to homogenize 'non-commutative' algebraic formulas of depth just $3$. We are able to show strong lower bounds against such homogenization, suggesting barriers for this approach.
--- No Girard-Newton identities for positive characteristic: In characteristic $0$, it is known how to homogenize constant-depth algebraic formulas with a size blow-up of $\exp(O(\sqrt{d}))$ using the Girard-Newton identities. Finding analogues of these identities in positive characteristic would allow us, paradoxically, to show 'lower bounds' for constant-depth formulas over such fields. We rule out a strong generalization of Girard-Newton identities in the setting of positive characteristic, suggesting that a different approach is required.
ECCC Papers (https://eccc.weizmann.ac.il/)

ECCC Papers: TR23-190 | From Trees to Polynomials and Back Again: New Capacity Bounds with Applications to TSP | Leonid Gurvits, Nathan Klein, Jonathan Leake (https://eccc.weizmann.ac.il/report/2023/190, 2023-12-03)
We give simply exponential lower bounds on the probabilities of a given strongly Rayleigh distribution, depending only on its expectation. This resolves a weak version of a problem left open by Karlin-Klein-Oveis Gharan in their recent breakthrough work on metric TSP, and this resolution leads to a minor improvement of their approximation factor for metric TSP. Our results also allow for a more streamlined analysis of the algorithm.
To achieve these new bounds, we build upon the work of Gurvits-Leake on the use of the productization technique for bounding the capacity of a real stable polynomial. This technique allows one to reduce certain inequalities for real stable polynomials to products of affine linear forms, which have an underlying matrix structure. In this paper, we push this technique further by characterizing the worst-case polynomials via bipartitioned forests. This rigid combinatorial structure yields a clean induction argument, which implies our stronger bounds.
In general, we believe the results of this paper will lead to further improvement and simplification of the analysis of various combinatorial and probabilistic bounds and algorithms.
ECCC Papers (https://eccc.weizmann.ac.il/)

arXiv: Computational Complexity: Proofs of Equalities NP = coNP = PSPACE: Simplification (http://arxiv.org/abs/2311.17939, 2023-12-01)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Gordeev_L/0/1/0/all/0/1">Lev Gordeev</a>, <a href="http://arxiv.org/find/cs/1/au:+Haeusler_E/0/1/0/all/0/1">Edward Hermann Haeusler</a></p><p>In this paper we present simplified proofs of our results NP = coNP = PSPACE.
</p>
arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)

arXiv: Computational Complexity: Lifting query complexity to time-space complexity for two-way finite automata (http://arxiv.org/abs/2311.18220, 2023-12-01)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Zheng_S/0/1/0/all/0/1">Shenggen Zheng</a>, <a href="http://arxiv.org/find/cs/1/au:+Li_Y/0/1/0/all/0/1">Yaqiao Li</a>, <a href="http://arxiv.org/find/cs/1/au:+Pan_M/0/1/0/all/0/1">Minghua Pan</a>, <a href="http://arxiv.org/find/cs/1/au:+Gruska_J/0/1/0/all/0/1">Jozef Gruska</a>, <a href="http://arxiv.org/find/cs/1/au:+Li_L/0/1/0/all/0/1">Lvzhou Li</a></p><p>Time-space tradeoff has been studied in a variety of models, such as Turing
machines, branching programs, and finite automata. While communication
complexity as a technique has been applied to study finite automata, it seems
it has not been used to study time-space tradeoffs of finite automata. We
design a new technique showing that separations of query complexity can be
lifted, via communication complexity, to separations of time-space complexity
of two-way finite automata. As an application, one of our main results exhibits
the first example of a language $L$ such that the time-space complexity of
two-way probabilistic finite automata with bounded error (2PFA) is
$\widetilde{\Omega}(n^2)$, while that of exact two-way quantum finite automata with
classical states (2QCFA) is $\widetilde{O}(n^{5/3})$. That is, we demonstrate
for the first time that exact quantum computing has an advantage in time-space
complexity compared to classical computing.
</p>
arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)

arXiv: Computational Complexity: Matrix discrepancy and the log-rank conjecture (http://arxiv.org/abs/2311.18524, 2023-12-01)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/math/1/au:+Sudakov_B/0/1/0/all/0/1">Benny Sudakov</a>, <a href="http://arxiv.org/find/math/1/au:+Tomon_I/0/1/0/all/0/1">István Tomon</a></p><p>Given an $m\times n$ binary matrix $M$ with $|M|=p\cdot mn$ (where $|M|$
denotes the number of 1 entries), define the discrepancy of $M$ as
$\mbox{disc}(M)=\displaystyle\max_{X\subset [m], Y\subset [n]}\big||M[X\times
Y]|-p|X|\cdot |Y|\big|$. Using semidefinite programming and spectral
techniques, we prove that if $\mbox{rank}(M)\leq r$ and $p\leq 1/2$, then
</p>
<p>$$\mbox{disc}(M)\geq \Omega(mn)\cdot
\min\left\{p,\frac{p^{1/2}}{\sqrt{r}}\right\}.$$
</p>
<p>We use this result to obtain a modest improvement of Lovett's best known
upper bound on the log-rank conjecture. We prove that any $m\times n$ binary
matrix $M$ of rank at most $r$ contains an $(m\cdot 2^{-O(\sqrt{r})})\times
(n\cdot 2^{-O(\sqrt{r})})$ sized all-1 or all-0 submatrix, which implies that
the deterministic communication complexity of any Boolean function of rank $r$
is at most $O(\sqrt{r})$.
</p>
arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)

arXiv: Computational Geometry: Fully Dynamic Algorithms for Euclidean Steiner Tree (http://arxiv.org/abs/2311.18365, 2023-12-01)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Chan_T/0/1/0/all/0/1">T-H. Hubert Chan</a>, <a href="http://arxiv.org/find/cs/1/au:+Goranci_G/0/1/0/all/0/1">Gramoz Goranci</a>, <a href="http://arxiv.org/find/cs/1/au:+Jiang_S/0/1/0/all/0/1">Shaofeng H.-C. Jiang</a>, <a href="http://arxiv.org/find/cs/1/au:+Wang_B/0/1/0/all/0/1">Bo Wang</a>, <a href="http://arxiv.org/find/cs/1/au:+Xue_Q/0/1/0/all/0/1">Quan Xue</a></p><p>The Euclidean Steiner tree problem asks to find a min-cost metric graph that
connects a given set of \emph{terminal} points $X$ in $\mathbb{R}^d$, possibly
using points not in $X$ which are called Steiner points. Even though
near-linear time $(1 + \epsilon)$-approximation was obtained in the offline
setting in seminal works of Arora and Mitchell, efficient dynamic algorithms
for Steiner tree are still open. We give the first algorithm that (implicitly)
maintains a $(1 + \epsilon)$-approximate solution which is accessed via a set
of tree traversal queries, subject to point insertion and deletions, with
amortized update and query time $O(\mathrm{poly}\log n)$ with high probability. Our
approach is based on an Arora-style geometric dynamic programming, and our main
technical contribution is to maintain the DP subproblems in the dynamic setting
efficiently. We also need to augment the DP subproblems to support the tree
traversal queries.
</p>
arXiv: Computational Geometry (https://arxiv.org/list/cs.CG/recent)

arXiv: Data Structures and Algorithms: Sparsifying generalized linear models (http://arxiv.org/abs/2311.18145, 2023-12-01)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Jambulapati_A/0/1/0/all/0/1">Arun Jambulapati</a>, <a href="http://arxiv.org/find/cs/1/au:+Lee_J/0/1/0/all/0/1">James R. Lee</a>, <a href="http://arxiv.org/find/cs/1/au:+Liu_Y/0/1/0/all/0/1">Yang P. Liu</a>, <a href="http://arxiv.org/find/cs/1/au:+Sidford_A/0/1/0/all/0/1">Aaron Sidford</a></p><p>We consider the sparsification of sums $F : \mathbb{R}^n \to \mathbb{R}$
where $F(x) = f_1(\langle a_1,x\rangle) + \cdots + f_m(\langle a_m,x\rangle)$
for vectors $a_1,\ldots,a_m \in \mathbb{R}^n$ and functions $f_1,\ldots,f_m :
\mathbb{R} \to \mathbb{R}_+$. We show that $(1+\varepsilon)$-approximate
sparsifiers of $F$ with support size $\frac{n}{\varepsilon^2} (\log
\frac{n}{\varepsilon})^{O(1)}$ exist whenever the functions $f_1,\ldots,f_m$
are symmetric, monotone, and satisfy natural growth bounds. Additionally, we
give efficient algorithms to compute such a sparsifier assuming each $f_i$ can
be evaluated efficiently.
</p>
<p>Our results generalize the classic case of $\ell_p$ sparsification, where
$f_i(z) = |z|^p$, for $p \in (0, 2]$, and give the first near-linear size
sparsifiers in the well-studied setting of the Huber loss function and its
generalizations, e.g., $f_i(z) = \min\{|z|^p, |z|^2\}$ for $0 < p \leq 2$. Our
sparsification algorithm can be applied to give near-optimal reductions for
optimizing a variety of generalized linear models including $\ell_p$ regression
for $p \in (1, 2]$ to high accuracy, via solving $(\log n)^{O(1)}$ sparse
regression instances with $m \le n(\log n)^{O(1)}$, plus runtime proportional
to the number of nonzero entries in the vectors $a_1, \dots, a_m$.
</p>
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)

arXiv: Data Structures and Algorithms: Almost-Linear Time Algorithms for Incremental Graphs: Cycle Detection, SCCs, $s$-$t$ Shortest Path, and Minimum-Cost Flow (http://arxiv.org/abs/2311.18295, 2023-12-01)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Chen_L/0/1/0/all/0/1">Li Chen</a>, <a href="http://arxiv.org/find/cs/1/au:+Kyng_R/0/1/0/all/0/1">Rasmus Kyng</a>, <a href="http://arxiv.org/find/cs/1/au:+Liu_Y/0/1/0/all/0/1">Yang P. Liu</a>, <a href="http://arxiv.org/find/cs/1/au:+Meierhans_S/0/1/0/all/0/1">Simon Meierhans</a>, <a href="http://arxiv.org/find/cs/1/au:+Gutenberg_M/0/1/0/all/0/1">Maximilian Probst Gutenberg</a></p><p>We give the first almost-linear time algorithms for several problems in
incremental graphs including cycle detection, strongly connected component
maintenance, $s$-$t$ shortest path, maximum flow, and minimum-cost flow. To
solve these problems, we give a deterministic data structure that returns a
$m^{o(1)}$-approximate minimum-ratio cycle in fully dynamic graphs in amortized
$m^{o(1)}$ time per update. Combining this with the interior point method
framework of Brand-Liu-Sidford (STOC 2023) gives the first almost-linear time
algorithm for deciding the first update in an incremental graph after which the
cost of the minimum-cost flow attains value at most some given threshold $F$.
By rather direct reductions to minimum-cost flow, we are then able to solve the
problems in incremental graphs mentioned above.
</p>
<p>At a high level, our algorithm dynamizes the $\ell_1$ oblivious routing of
Rozho\v{n}-Grunau-Haeupler-Zuzic-Li (STOC 2022), and develops a method to
extract an approximate minimum ratio cycle from the structure of the oblivious
routing. To maintain the oblivious routing, we use tools from concurrent work
of Kyng-Meierhans-Probst Gutenberg which designed vertex sparsifiers for
shortest paths, in order to maintain a sparse neighborhood cover in fully
dynamic graphs.
</p>
<p>To find a cycle, we first show that an approximate minimum ratio cycle can be
represented as a fundamental cycle on a small set of trees resulting from the
oblivious routing. Then, we find a cycle whose quality is comparable to the
best tree cycle. This final cycle query step involves vertex and edge
sparsification procedures reminiscent of previous works, but crucially requires
a more powerful dynamic spanner which can handle far more edge insertions. We
build such a spanner via a construction that hearkens back to the classic
greedy spanner algorithm.
</p>
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)

David Eppstein: Linkage (https://11011110.github.io/blog/2023/11/30/linkage, 2023-11-30)
<ul>
<li>
<p><a href="https://mathstodon.xyz/@fortnow@fediscience.org/111421672592114335">Lance Fortnow notices fewer faculty job ads in this year’s November <em>CRA News</em></a> and asks: “is there a real drop in hiring, or something else?”</p>
</li>
<li>
<p><a href="https://www.uottawa.ca/faculty-engineering/news-all/computer-science-professor-awarded-university-research-chair">Vida Dujmović awarded University of Ottawa Research Chair</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@DavidWood/111423964645469502">\(\mathbb{M}\)</a>)</span>.</p>
</li>
<li>
<p><a href="https://mathstodon.xyz/@andrejbauer/111433678550260583">POPL 2024 registration is on the order of £2000?</a> It’s convenient that this ridiculous fee is nowhere visible on the <a href="https://popl24.sigplan.org/">POPL web site</a>. This needs to be shown to all the computational geometers who would still like SoCG to return to the ACM fold. (Last year’s early registration fee: $375.)</p>
</li>
<li>
<p><a href="https://doi.org/10.1080/00029890.2023.2230860">Characterization of the quadrilaterals that admit billiard circuits</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@divbyzero/111364598692794584">\(\mathbb{M}\)</a>)</span>, Katherine Knox. This means that a billiard ball can bounce once off each side in order before returning to its starting position. <a href="http://www.girlsangle.org/page/bulletin-archive/GABv16n06E.pdf">The author proved this when she was in 7th grade!</a> See also <a href="https://www.geogebra.org/m/sd5c52pt">Geogebra applet</a>.</p>
</li>
<li>
<p><a href="https://heldenreis.nl/2023/09/creatief-of-het-verhaal-van-de-corolla">The corolla</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@Heldinne@mastodon.social/111136626464851823">\(\mathbb{M}\)</a>)</span>, something like a crown-shaped 3d flexagon decorated with Escher prints, with making-of blog post.</p>
</li>
<li>
<p><a href="https://aperiodical.com/2023/11/mathematical-drawing-hacks/">Mathematical drawing hacks</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@aperiodical/111459770806023952">\(\mathbb{M}\)</a>)</span>, including tips on how to draw Möbius strips, tetrahedra, and curly brackets.</p>
</li>
<li>
<p><a href="https://diff.wikimedia.org/2023/06/05/open-access-to-heritage-images-is-becoming-increasingly-difficult-in-italy/">Open access to heritage images is becoming increasingly difficult in Italy</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/111468542343850070">\(\mathbb{M}\)</a>)</span>. The background to this is an Italian court case last year in which The Gallerie dell’Accademia di Venezia, a public museum in Venice, <a href="https://news.artnet.com/news/ravensburger-da-vinci-vitruvian-man-puzzle-ruling-gallerie-dell-accademia-2276738">won a lawsuit</a> forcing jigsaw puzzle maker Ravensburger to <a href="https://communia-association.org/2023/03/01/the-vitruvian-man-a-puzzling-case-for-the-public-domain/">pay royalties for reproductions of far far out of copyright works by Leonardo Da Vinci</a>. This decision poses a threat not just to toy companies, but to Wikipedia and other users of public domain artworks.</p>
</li>
<li>
<p><a href="https://elpais.com/mexico/2023-11-19/el-ajedrez-judicial-de-la-ministra-yasmin-esquivel-para-sepultar-el-informe-de-la-unam-sobre-su-tesis.html">How to plagiarize and get away with it</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/111478049561681896">\(\mathbb{M}\)</a>,</span> <a href="https://retractionwatch.com/2023/11/25/weekend-reads-a-scientific-fraud-epidemic-censorship-by-retraction-buying-and-selling-articles/">via</a>): become powerful enough that your friends in the court system declare you to be the true author of your bachelor’s thesis and enjoin your university from publishing their investigations. <em>El Pais</em> tells the story of Mexican supreme court justice <a href="https://en.wikipedia.org/wiki/Yasm%C3%ADn_Esquivel_Mossa">Yasmín Esquivel</a>.</p>
</li>
<li>
<p><a href="https://mathstodon.xyz/@robinhouston/111478689679139028">Robin Houston asks how few adjacent cubes in a \(4\times 4\times 4\) grid need to be glued to make the whole thing rigid</a>. I found a solution with 50 glued pairs, the current record, but I'm not convinced it's optimal. <a href="/blog/assets/2023/4x4x4-interlock.svg">Image link is a spoiler</a>.</p>
</li>
<li>
<p><a href="https://youtu.be/KD_hRn_97RI">Slide-glide cyclides</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@henryseg/111471633209073685">\(\mathbb{M}\)</a>)</span>. Banana-shaped pieces (geared at their shared base) that slide past each other to provide a mechanical demonstration of the equality of area of a sphere and a disk of twice the radius. 3d print and video by Henry Segerman.</p>
</li>
<li>
<p><a href="https://www.sciencenews.org/article/spirals-inspire-walking-aids-people-disabilities">Why it’s important to choose the right kind of spiral in your designs</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/111491282716621022">\(\mathbb{M}\)</a>)</span>. Bernoulli merely ended up with a permanent mistake on his tombstone (an Archimedean spiral replacing his requested logarithmic spiral). But if you design a rock-climbing cam with a non-logarithmic spiral, or with the wrong logarithmic spiral, its incorrect angle against the rock may fail to protect you from a fall. And spiral wheels with a different carefully chosen shape can be used to design rocking self-propelled skateboards, crutches that help push their users forward, and therapeutic shoes that lengthen the gait of one foot, training people with gait impairments to walk better.</p>
</li>
<li>
<p><a href="https://academia.stackexchange.com/questions/204370/what-should-i-do-if-i-suspect-one-of-the-journal-reviews-i-got-is-al-generated">What should I do if I suspect one of the journal reviews I got is AI-generated?</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@elduvelle@neuromatch.social/111494847703954840">\(\mathbb{M}\)</a>)</span> Consensus answer: complain to the editor.</p>
</li>
<li>
<p>The first mathematics publication I know of for the <a href="https://en.wikipedia.org/wiki/Binary_tiling">binary tiling or Böröczky tiling of the hyperbolic plane</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/111502588297348659">\(\mathbb{M}\)</a>)</span> was in 1974 by Károly Böröczky. The same recursive structure appeared earlier, in a 1957 work of M. C. Escher, “<a href="https://www.escherinhetpaleis.nl/escher-today/woodblocks-and-the-regular-division-of-the-plane/?lang=en">Regular Division of the Plane VI</a>”. I thought that might be the beginning of the story, but then I noticed that the <a href="https://en.wikipedia.org/wiki/Smith_chart">Smith chart</a> used in microwave engineering looks very similar: compare the binary tiling (below left) to the Smith chart (right) in the two Wikipedia images below.</p>
<p style="text-align:center"><img src="/blog/assets/2023/Poincare-binary-tiling.png" alt="Binary tiling, CC-BY-SA 4.0 image by Why not butterfly, 19 July 2023, from https://commons.wikimedia.org/wiki/File:Hyperbolic_binary_tiling.png" style="width:45%" /> <img src="/blog/assets/2023/Smith-chart.svg" alt="Smith chart, CC-BY-SA 3.0 image by Wdwd, 19 September 2010, from https://commons.wikimedia.org/wiki/File:Smith_chart_gen.svg" style="width:45%" /></p>
<p>The same subdivision on horocycles and perpendicular hyperbolic lines is present in both images, as is the same quadtree-like refinement of the subdivision as you move further away from the center of the drawing. But the Smith charts perform the subdivision in a more irregular pattern, rather than subdividing at each successive horocycle. The original 1939 Smith chart looks similar, but rotated \(90^\circ\) and with an even more irregular pattern of subdivisions. See page 33 of <a href="https://www.worldradiohistory.com/Archive-Electronics/30s/Electronics-1939-01.pdf"><em>Electronics</em> magazine from January 1939</a>.</p>
</li>
</ul><p class="authors">By David Eppstein</p>
David Eppsteinhttps://11011110.github.io/blog/CCI: jobs: Visiting Lecturer/Senior Lecturer/Assistant Professor for “Interactive Narrative” course at Innopolis University (apply by January 30, 2024)http://cstheory-jobs.org/2023/11/30/visiting-lecturer-senior-lecturer-assistant-professor-for-interactive-narrative-course-at-innopolis-university-apply-by-january-30-2024/2023-11-30T13:24:02+00:00
<p>Responsibilities:</p>
<p>Design the syllabus and teach students to develop storylines and characters: character profiles, bios, and sketches; scripts of in-game dialogues; storyboards. Design grading metrics.</p>
<p>Qualifications:</p>
<p>Technical or humanities background;<br />
Excellent command of Game Design and Gamification processes;<br />
Experience teaching young adults.</p>
<p>Website: <a href="https://career.innopolis.university/en/job/">https://career.innopolis.university/en/job/</a><br />
Email: faculty@innopolis.ru</p>
<p class="authors">By shacharlovett</p>
CCI: jobshttps://cstheory-jobs.orgCCI: jobs: Visiting Lecturer / Assistant Professor for “Game Design and Gamification” course at Innopolis University (apply by January 30, 2024)http://cstheory-jobs.org/2023/11/30/visiting-lecturer-assistant-professor-for-game-design-and-gamification-course-at-innopolis-university-apply-by-january-30-2024/2023-11-30T13:20:55+00:00
<p>Responsibilities:</p>
<p>Design the syllabus and teach Game Design Methodologies, Gamification Processes, and Player Experiences;<br />
Teach using methods such as project-based learning, active discussions, role play, case studies, etc. Design grading metrics.</p>
<p>Qualifications:</p>
<p>Technical or humanities background;<br />
Excellent command of Game Design and Gamification processes;<br />
Experience teaching young adults.</p>
<p>Website: <a href="https://career.innopolis.university/en/job/">https://career.innopolis.university/en/job/</a><br />
Email: faculty@innopolis.ru</p>
<p class="authors">By shacharlovett</p>
CCI: jobshttps://cstheory-jobs.orgCCI: jobs: Lecturer/Assistant Professor in Computer Science/ Games Specialty at Innopolis University (apply by January 30, 2024)http://cstheory-jobs.org/2023/11/30/lecturer-assistant-professor-in-computer-science-games-specialty-at-innopolis-university-apply-by-january-30-2024/2023-11-30T13:17:28+00:00
<p>Qualifications:</p>
<p>Ph.D. degree and published research in Computer Science.<br />
Focus on AI, GameDev and Data Science.</p>
<p>Responsibilities:</p>
<p>Frontal teaching of courses (Game History, Unity Game Development, Interactive Narrative, Business of Games, etc.) during the academic year; lead research activities; advise and mentor students.</p>
<p>Website: <a href="https://career.innopolis.university/en/job/">https://career.innopolis.university/en/job/</a><br />
Email: faculty@innopolis.ru</p>
<p class="authors">By shacharlovett</p>
CCI: jobshttps://cstheory-jobs.orgCCI: jobs: Full Professor in Robotics & Computer Vision at Innopolis University (apply by January 30, 2024)http://cstheory-jobs.org/2023/11/30/full-professor-in-robotics-computer-vision-at-innopolis-university-apply-by-january-30-2024/2023-11-30T13:13:30+00:00
<p>Qualifications:</p>
<p>PhD degree;<br />
practical experience in Sensor Fusion, SLAM, Perception, ML, RL, DL;<br />
strong record of published papers in CV for robotic applications;<br />
experience as PI in projects</p>
<p>Responsibilities:</p>
<p>Up to 120 hours of frontal teaching in 4 courses per year, leading research, advising and mentoring students, other activities for developing the intellectual environment.</p>
<p>Website: <a href="https://career.innopolis.university/en/job/">https://career.innopolis.university/en/job/</a><br />
Email: faculty@innopolis.ru</p>
<p class="authors">By shacharlovett</p>
CCI: jobshttps://cstheory-jobs.orgCCI: jobs: Associate Professor in Data Science & AI at Innopolis University (apply by January 30, 2024)http://cstheory-jobs.org/2023/11/30/associate-professor-in-data-science-ai-at-innopolis-university-apply-by-january-30-2024/2023-11-30T13:10:53+00:00
<p>Qualifications:<br />
PhD degree<br />
2+ years of Assistant Professorship<br />
record of published papers in A* conferences</p>
<p>Responsibilities:<br />
Up to 120 academic hours of frontal teaching in 4 courses per year, leading high quality research, advising and mentoring students, other activities related to developing and maintaining the intellectual and cultural environment of the University.</p>
<p>Website: <a href="https://career.innopolis.university/en/job/">https://career.innopolis.university/en/job/</a><br />
Email: faculty@innopolis.ru</p>
<p class="authors">By shacharlovett</p>
CCI: jobshttps://cstheory-jobs.orgCCI: jobs: Associate/Full Professor in Software Engineering at Innopolis University (apply by January 30, 2024)http://cstheory-jobs.org/2023/11/30/associate-full-professor-in-software-engineering-at-innopolis-university-apply-by-january-30-2024/2023-11-30T13:08:26+00:00
<p>Qualifications:<br />
PhD degree and published research in software quality and software architecture;<br />
experience of leading software development teams;<br />
industry experience</p>
<p>Responsibilities:<br />
Up to 120 academic hours of frontal teaching in 4 courses per year, leading high quality research, advising and mentoring students, other activities for developing and maintaining the intellectual environment.</p>
<p>Website: <a href="https://career.innopolis.university/en/job/">https://career.innopolis.university/en/job/</a><br />
Email: faculty@innopolis.ru</p>
<p class="authors">By shacharlovett</p>
CCI: jobshttps://cstheory-jobs.orgCCI: jobs: Assistant Professor in Software Development and Engineering at Innopolis University (apply by January 30, 2024)http://cstheory-jobs.org/2023/11/30/assistant-professor-in-software-development-and-engineering-at-innopolis-university-apply-by-january-30-2024/2023-11-30T13:06:11+00:00
<p>Qualifications:</p>
<p>PhD degree and published research in programming languages, compilers, databases, OS</p>
<p>industry experience</p>
<p>Responsibilities:<br />
Up to 120 academic hours of frontal teaching in 4 courses per year, leading high quality research, advising and mentoring students, other activities related to developing and maintaining the intellectual and cultural environment of the University.</p>
<p>Website: <a href="https://career.innopolis.university/en/job/">https://career.innopolis.university/en/job/</a><br />
Email: faculty@innopolis.ru</p>
<p class="authors">By shacharlovett</p>
CCI: jobshttps://cstheory-jobs.orgCCI: jobs: Assistant Professor in Robotics & Computer Vision at Innopolis University (apply by January 30, 2024)http://cstheory-jobs.org/2023/11/30/assistant-professor-in-robotics-computer-vision-at-innopolis-university-apply-by-january-30-2024/2023-11-30T13:04:11+00:00
<p>Qualifications:</p>
<p>PhD degree</p>
<p>TAship experience</p>
<p>experience in Machine Learning</p>
<p>strong record of published papers</p>
<p>Responsibilities:</p>
<p>Up to 120 academic hours of frontal teaching in 4 courses per year, leading high quality research, advising and mentoring students, other activities related to developing and maintaining the intellectual and cultural environment of the University.</p>
<p>Website: <a href="https://career.innopolis.university/en/job/">https://career.innopolis.university/en/job/</a><br />
Email: faculty@innopolis.ru</p>
<p class="authors">By shacharlovett</p>
CCI: jobshttps://cstheory-jobs.orgCCI: jobs: Assistant Professor in Data Science & AI at Innopolis University (apply by January 30, 2024)http://cstheory-jobs.org/2023/11/30/assistant-professor-in-data-science-ai-at-innopolis-university-apply-by-january-30-2024/2023-11-30T12:42:18+00:00
<p>Qualifications:</p>
<p>PhD degree</p>
<p>depth of expertise</p>
<p>experience of being a TA</p>
<p>strong record of published papers</p>
<p>Responsibilities:</p>
<p>Up to 120 academic hours of frontal teaching in 4 courses per year, leading high quality research, advising and mentoring students, other activities related to developing and maintaining the intellectual and cultural environment of the University.</p>
<p>Website: <a href="https://career.innopolis.university/en/job/">https://career.innopolis.university/en/job/</a><br />
Email: faculty@innopolis.ru</p>
<p class="authors">By shacharlovett</p>
CCI: jobshttps://cstheory-jobs.orgCCI: jobs: Faculty positions in quantum computing and quantum algorithms at EPFL (apply by December 15, 2023)http://cstheory-jobs.org/2023/11/30/faculty-positions-in-quantum-computing-and-quantum-algorithms-at-epfl-apply-by-december-15-2023/2023-11-30T10:10:37+00:00
<p>EPFL invites applications for a faculty position in all areas of quantum computing and quantum algorithms.</p>
<p>Screening starts on December 15, but later applicants will also be considered. Reach out to Ola Svensson with any questions about the search, the position, and the school!</p>
<p>Website: <a href="http://go.epfl.ch/qc-jobs">http://go.epfl.ch/qc-jobs</a><br />
Email: ola.svensson@epfl.ch</p>
<p class="authors">By shacharlovett</p>
CCI: jobshttps://cstheory-jobs.orgarXiv: Computational Complexity: On the Complexity of the Median and Closest Permutation Problemshttp://arxiv.org/abs/2311.172242023-11-30T01:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Cunha_L/0/1/0/all/0/1">Luís Cunha</a>, <a href="http://arxiv.org/find/cs/1/au:+Sau_I/0/1/0/all/0/1">Ignasi Sau</a>, <a href="http://arxiv.org/find/cs/1/au:+Souza_U/0/1/0/all/0/1">Uéverton Souza</a></p><p>Genome rearrangements are events where large blocks of DNA exchange places
during evolution. The analysis of these events is a promising tool for
understanding evolutionary genomics, providing data for phylogenetic
reconstruction based on genome rearrangement measures. Many pairwise
rearrangement distances have been proposed, based on finding the minimum number
of rearrangement events to transform one genome into the other, using some
predefined operation. When more than two genomes are considered, we have the
more challenging problem of rearrangement-based phylogeny reconstruction. Given
a set of genomes and a distance notion, there are at least two natural ways to
define the "target" genome: on the one hand, a genome that minimizes the sum
of the distances to all the others, called the median genome; on the other
hand, a genome that minimizes the maximum distance to any other, called the
closest genome. Considering genomes as permutations, some distance metrics have
been extensively studied. We investigate median and closest problems on
permutations over the metrics: breakpoint, swap, block-interchange,
short-block-move, and transposition. In biological instances, some values are
usually small, such as the solution value d or the number k of input
permutations. For each of these metrics and parameters d or k, we analyze the
closest and the median problems from the viewpoint of parameterized complexity.
We obtain the following results: NP-hardness for finding the median/closest
permutation for some metrics, even for k = 3; Polynomial kernels for the
problems of finding the median permutation of all studied metrics, considering
the target distance d as parameter; NP-hardness result for finding the closest
permutation by short-block-moves; FPT algorithms and infeasibility of
polynomial kernels for finding the closest permutation for some metrics
parameterized by the target distance d.
</p>
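<p>As a tiny concrete illustration of the two objectives, the sketch below brute-forces the median and the closest permutation over all of \(S_n\). It uses the Kendall tau (inversion) distance purely as a stand-in metric, not necessarily one of the metrics studied in the paper, and brute force is precisely what the parameterized algorithms above are meant to avoid.</p>

```python
from itertools import permutations

def kendall_tau(p, q):
    # Inversion (adjacent-swap) distance between two permutations,
    # used here only as a simple illustrative metric.
    pos = {v: i for i, v in enumerate(q)}
    r = [pos[v] for v in p]
    return sum(1 for i in range(len(r))
                 for j in range(i + 1, len(r)) if r[i] > r[j])

def median_and_closest(perms, dist):
    # Brute force over all of S_n: the median minimizes the SUM of
    # distances to the inputs, the closest minimizes the MAXIMUM distance.
    # Exponential in n, hence only viable for tiny instances.
    n = len(perms[0])
    candidates = list(permutations(range(n)))
    median = min(candidates, key=lambda c: sum(dist(c, p) for p in perms))
    closest = min(candidates, key=lambda c: max(dist(c, p) for p in perms))
    return median, closest

perms = [(0, 1, 2, 3), (1, 0, 2, 3), (0, 1, 3, 2)]
print(median_and_closest(perms, kendall_tau))
```

<p>Here the identity permutation is both median and closest, since each input is one adjacent swap away from it.</p>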
arXiv: Computational Complexityhttps://arxiv.org/list/cs.CC/recentarXiv: Computational Complexity: A Local Approach to Studying the Time and Space Complexity of Deterministic and Nondeterministic Decision Treeshttp://arxiv.org/abs/2311.173062023-11-30T01:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Durdymyradov_K/0/1/0/all/0/1">Kerven Durdymyradov</a>, <a href="http://arxiv.org/find/cs/1/au:+Moshkov_M/0/1/0/all/0/1">Mikhail Moshkov</a></p><p>In this paper, we study arbitrary infinite binary information systems each of
which consists of an infinite set called universe and an infinite set of
two-valued functions (attributes) defined on the universe. We consider the
notion of a problem over an information system, which is described by a finite
number of attributes and a mapping associating a decision to each tuple of
attribute values. As algorithms for problem solving, we investigate
deterministic and nondeterministic decision trees that use only attributes from
the problem description. Nondeterministic decision trees are representations of
decision rule systems that sometimes have less space complexity than the
original rule systems. As time and space complexity, we study the depth and the
number of nodes in the decision trees. In the worst case, with the growth of
the number of attributes in the problem description, (i) the minimum depth of
deterministic decision trees grows either as a logarithm or linearly, (ii) the
minimum depth of nondeterministic decision trees either is bounded from above
by a constant or grows linearly, (iii) the minimum number of nodes in
deterministic decision trees has either polynomial or exponential growth, and
(iv) the minimum number of nodes in nondeterministic decision trees has either
polynomial or exponential growth. Based on these results, we divide the set of
all infinite binary information systems into three complexity classes. This
allows us to identify nontrivial relationships between deterministic decision
trees and decision rule systems represented by nondeterministic decision
trees. For each class, we study issues related to time-space trade-off for
deterministic and nondeterministic decision trees.
</p>
arXiv: Computational Complexityhttps://arxiv.org/list/cs.CC/recentarXiv: Computational Complexity: Violating Constant Degree Hypothesis Requires Breaking Symmetryhttp://arxiv.org/abs/2311.174402023-11-30T01:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Kawalek_P/0/1/0/all/0/1">Piotr Kawałek</a>, <a href="http://arxiv.org/find/cs/1/au:+Weiss_A/0/1/0/all/0/1">Armin Weiß</a></p><p>The Constant Degree Hypothesis was introduced by Barrington et al. (1990) to
study some extensions of $q$-groups by nilpotent groups and the power of these
groups in a certain computational model. In its simplest formulation, it
establishes exponential lower bounds for $\mathrm{AND}_d \circ \mathrm{MOD}_m
\circ \mathrm{MOD}_q$ circuits computing AND of unbounded arity $n$ (for
constant integers $d,m$ and a prime $q$). While it has been proved in some
special cases (including $d=1$), it has remained wide open in its general form
for over 30 years.
</p>
<p>In this paper we prove that the hypothesis holds when we restrict our
attention to symmetric circuits with $m$ being a prime. While we build upon
techniques by Grolmusz and Tardos (2000), we have to prove a new symmetric
version of their Degree Decreasing Lemma and apply it in a highly non-trivial
way. Moreover, to establish the result we perform a careful analysis of
automorphism groups of $\mathrm{AND} \circ \mathrm{MOD}_m$ subcircuits and
study the periodic behaviour of the computed functions.
</p>
<p>Finally, our methods also yield lower bounds when $d$ is treated as a
function of $n$.
</p>
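<p>To make the circuit class concrete, here is a sketch of evaluating an \(\mathrm{AND} \circ \mathrm{MOD}_m \circ \mathrm{MOD}_q\) circuit. It uses one common convention for MOD gates (fire iff the input sum is not divisible by the modulus; variants with general accepting sets also appear in the literature), and the wiring in the example is made up for illustration, not taken from the paper.</p>

```python
def mod_gate(bits, m):
    # MOD_m gate under one common convention: output 1 iff the number
    # of 1-inputs is NOT divisible by m.
    return int(sum(bits) % m != 0)

def and_mod_mod(x, bottom, middle, m, q):
    # Evaluate an AND o MOD_m o MOD_q circuit on input bits x.
    # bottom[i] lists the input indices read by the i-th MOD_q gate;
    # middle[j] lists the MOD_q gates feeding the j-th MOD_m gate;
    # the output gate ANDs all MOD_m outputs.
    level1 = [mod_gate([x[i] for i in idxs], q) for idxs in bottom]
    level2 = [mod_gate([level1[j] for j in idxs], m) for idxs in middle]
    return int(all(level2))

# Two-input example: each bottom MOD_2 gate copies its input bit, and the
# single middle MOD_2 gate then computes their parity, so this wiring
# outputs XOR, a periodic function of the input weight.
truth_table = [and_mod_mod([a, b], [[0], [1]], [[0, 1]], 2, 2)
               for a in (0, 1) for b in (0, 1)]
print(truth_table)
```

<p>The hypothesis says that making such a circuit compute AND of all \(n\) inputs, rather than a periodic function like the parity above, requires the top fan-in \(d\) or the circuit size to blow up.</p>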
arXiv: Computational Complexityhttps://arxiv.org/list/cs.CC/recentarXiv: Computational Complexity: Fast list-decoding of univariate multiplicity and folded Reed-Solomon codeshttp://arxiv.org/abs/2311.178412023-11-30T01:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Goyal_R/0/1/0/all/0/1">Rohan Goyal</a>, <a href="http://arxiv.org/find/cs/1/au:+Harsha_P/0/1/0/all/0/1">Prahladh Harsha</a>, <a href="http://arxiv.org/find/cs/1/au:+Kumar_M/0/1/0/all/0/1">Mrinal Kumar</a>, <a href="http://arxiv.org/find/cs/1/au:+Shankar_A/0/1/0/all/0/1">Ashutosh Shankar</a></p><p>We show that the known list-decoding algorithms for univariate multiplicity
and folded Reed-Solomon (FRS) codes can be made to run in nearly-linear time.
This yields, to our knowledge, the first known family of codes that can be
decoded in nearly linear time, even as they approach the list decoding
capacity. Univariate multiplicity codes and FRS codes are natural variants of
Reed-Solomon codes that were discovered and studied for their applications to
list-decoding. It is known that for every $\epsilon >0$, and rate $R \in
(0,1)$, there exist explicit families of these codes that have rate $R$ and can
be list-decoded from a $(1-R-\epsilon)$ fraction of errors with constant list
size in polynomial time (Guruswami & Wang (IEEE Trans. Inform. Theory, 2013)
and Kopparty, Ron-Zewi, Saraf & Wootters (SIAM J. Comput. 2023)). In this work,
we present randomized algorithms that perform the above tasks in nearly linear
time. Our algorithms have two main components. The first builds upon the
lattice-based approach of Alekhnovich (IEEE Trans. Inf. Theory 2005), who
designed a nearly linear time list-decoding algorithm for Reed-Solomon codes
approaching the Johnson radius. As part of the second component, we design
nearly-linear time algorithms for two natural algebraic problems. The first
algorithm solves linear differential equations of the form $Q\left(x, f(x),
\frac{df}{dx}, \dots,\frac{d^m f}{dx^m}\right) \equiv 0$ where $Q$ has the form
$Q(x,y_0,\dots,y_m) = \tilde{Q}(x) + \sum_{i = 0}^m Q_i(x)\cdot y_i$. The
second solves functional equations of the form $Q\left(x, f(x), f(\gamma x),
\dots,f(\gamma^m x)\right) \equiv 0$ where $\gamma$ is a high-order field
element. These algorithms can be viewed as generalizations of classical
algorithms of Sieveking (Computing 1972) and Kung (Numer. Math. 1974) for
computing the modular inverse of a power series, and might be of independent
interest.
</p>
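<p>For flavour, here is the classical Sieveking–Kung primitive that the abstract says these algorithms generalize: Newton iteration doubles the precision of a power-series inverse at each step. This is a minimal sketch with naive (quadratic-time) truncated multiplication; the nearly-linear-time versions would use fast polynomial multiplication instead.</p>

```python
def series_inverse(f, n):
    # Newton iteration (Sieveking 1972, Kung 1974) for the inverse of a
    # power series: given coefficients f with f[0] != 0, return g with
    # f * g = 1 (mod x^n).  Precision doubles each round via
    # g <- g * (2 - f*g) mod x^k.
    def mul(a, b, k):
        # product of two coefficient lists, truncated to k coefficients
        out = [0.0] * k
        for i, ai in enumerate(a[:k]):
            for j, bj in enumerate(b[:k - i]):
                out[i + j] += ai * bj
        return out
    g = [1.0 / f[0]]
    k = 1
    while k < n:
        k = min(2 * k, n)
        fg = mul(f, g, k)
        two_minus_fg = [2.0 - fg[0]] + [-c for c in fg[1:]]
        g = mul(g, two_minus_fg, k)
    return g

# 1/(1 - x) = 1 + x + x^2 + x^3 + ...
print(series_inverse([1.0, -1.0], 4))
```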
arXiv: Computational Complexityhttps://arxiv.org/list/cs.CC/recentarXiv: Computational Geometry: Exceptional Mechanical Performance by Spatial Printing with Continuous Fiberhttp://arxiv.org/abs/2311.172652023-11-30T01:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Fang_G/0/1/0/all/0/1">Guoxin Fang</a>, <a href="http://arxiv.org/find/cs/1/au:+Zhang_T/0/1/0/all/0/1">Tianyu Zhang</a>, <a href="http://arxiv.org/find/cs/1/au:+Huang_Y/0/1/0/all/0/1">Yuming Huang</a>, <a href="http://arxiv.org/find/cs/1/au:+Zhang_Z/0/1/0/all/0/1">Zhizhou Zhang</a>, <a href="http://arxiv.org/find/cs/1/au:+Masania_K/0/1/0/all/0/1">Kunal Masania</a>, <a href="http://arxiv.org/find/cs/1/au:+Wang_C/0/1/0/all/0/1">Charlie C.L. Wang</a></p><p>This work explores a spatial printing method to fabricate continuous
fiber-reinforced thermoplastic composites (CFRTPCs), which can achieve
exceptional mechanical performance. For models giving complex 3D stress
distribution under loads, typical planar-layer based fiber placement usually
fails to provide sufficient reinforcement due to their orientations being
constrained to planes. The effectiveness of fiber reinforcement could be
maximized by using multi-axis additive manufacturing (MAAM) to better control
the orientation of continuous fibers in 3D-printed composites. Here, we propose
a computational approach to generate 3D toolpaths that satisfy two major
reinforcement objectives: 1) following the maximal stress directions in
critical regions and 2) connecting multiple load-bearing regions by continuous
fibers. Principal stress lines are first extracted in an input solid model to
identify critical regions. Curved layers aligned with maximal stresses in these
critical regions are generated by computing an optimized scalar field and
extracting its iso-surfaces. Then, topological analysis and operations are
applied to each curved layer to generate a computational domain that preserves
fiber continuity between load-bearing regions. Lastly, continuous fiber
toolpaths aligned with maximal stresses are generated on each surface layer by
computing an optimized scalar field and extracting its iso-curves. A hardware
system with dual robotic arms is employed to conduct the physical MAAM tasks
depositing polymer or fiber reinforced polymer composite materials by applying
a force normal to the extrusion plane to aid consolidation. Compared to
planar-layer based prints tested in tension, shapes fabricated by our spatial
printing method exhibit up to 644% of the breaking force and 240% of the
stiffness.
</p>
arXiv: Computational Geometryhttps://arxiv.org/list/cs.CG/recentarXiv: Computational Geometry: Constructing Optimal $L_{\infty}$ Star Discrepancy Setshttp://arxiv.org/abs/2311.174632023-11-30T01:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Clement_F/0/1/0/all/0/1">François Clément</a>, <a href="http://arxiv.org/find/cs/1/au:+Doerr_C/0/1/0/all/0/1">Carola Doerr</a>, <a href="http://arxiv.org/find/cs/1/au:+Klamroth_K/0/1/0/all/0/1">Kathrin Klamroth</a>, <a href="http://arxiv.org/find/cs/1/au:+Paquete_L/0/1/0/all/0/1">Luís Paquete</a></p><p>The $L_{\infty}$ star discrepancy is a very well-studied measure used to
quantify the uniformity of a point set distribution. Constructing optimal point
sets for this measure is seen as a very hard problem in the discrepancy
community. Indeed, optimal point sets are, up to now, known only for $n\leq 6$
in dimension 2 and $n \leq 2$ for higher dimensions. We introduce in this paper
mathematical programming formulations to construct point sets with as low
$L_{\infty}$ star discrepancy as possible. Firstly, we present two models to
construct optimal sets and show that there always exist optimal sets with the
property that no two points share a coordinate. Then, we provide possible
extensions of our models to other measures, such as the extreme and periodic
discrepancies. For the $L_{\infty}$ star discrepancy, we are able to compute
optimal point sets for up to 21 points in dimension 2 and for up to 8 points in
dimension 3. For $d=2$ and $n\ge 7$ points, these point sets have around a 50%
lower discrepancy than the current best point sets, and show a very different
structure.
</p>
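<p>For readers who have not met the measure, here is a naive sketch of evaluating the \(L_{\infty}\) star discrepancy of a 2D point set — the quantity the paper's models minimize, not the models themselves. It relies on the standard fact that the supremum over anchored boxes is attained on the grid spanned by the point coordinates (plus 1), checking both the open and the closed box at each candidate corner.</p>

```python
from itertools import product

def star_discrepancy_2d(points):
    # L_infinity star discrepancy: the supremum over anchored boxes
    # [0,qx) x [0,qy) of the gap between the box volume and the fraction
    # of points it contains.  Candidate corners are restricted to the
    # grid of point coordinates union {1.0}; the open-box count bounds
    # the volume excess, the closed-box count the point excess.
    n = len(points)
    xs = sorted({p[0] for p in points} | {1.0})
    ys = sorted({p[1] for p in points} | {1.0})
    disc = 0.0
    for qx, qy in product(xs, ys):
        vol = qx * qy
        inside_open = sum(1 for x, y in points if x < qx and y < qy)
        inside_closed = sum(1 for x, y in points if x <= qx and y <= qy)
        disc = max(disc, vol - inside_open / n, inside_closed / n - vol)
    return disc

# A single centered point has discrepancy 0.75: the closed box up to
# (0.5, 0.5) contains the whole point mass but only 1/4 of the volume.
print(star_discrepancy_2d([(0.5, 0.5)]))
```

<p>This runs in \(O(n^3)\) for \(n\) points in dimension 2, which is fine for checking small candidate sets like those in the paper but is far from a construction method.</p>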
arXiv: Computational Geometryhttps://arxiv.org/list/cs.CG/recentarXiv: Computational Geometry: Improving embedding of graphs with missing data by soft manifoldshttp://arxiv.org/abs/2311.175982023-11-30T01:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Marinoni_A/0/1/0/all/0/1">Andrea Marinoni</a>, <a href="http://arxiv.org/find/cs/1/au:+Lio_P/0/1/0/all/0/1">Pietro Lio'</a>, <a href="http://arxiv.org/find/cs/1/au:+Barp_A/0/1/0/all/0/1">Alessandro Barp</a>, <a href="http://arxiv.org/find/cs/1/au:+Jutten_C/0/1/0/all/0/1">Christian Jutten</a>, <a href="http://arxiv.org/find/cs/1/au:+Girolami_M/0/1/0/all/0/1">Mark Girolami</a></p><p>Embedding graphs in continuous spaces is a key factor in designing and
developing algorithms for automatic information extraction to be applied in
diverse tasks (e.g., learning, inferring, predicting). The reliability of graph
embeddings directly depends on how much the geometry of the continuous space
matches the graph structure. Manifolds are mathematical structures whose
topological spaces can incorporate the graph characteristics, in particular
node distances. State-of-the-art manifold-based graph
embedding algorithms take advantage of the assumption that the projection on a
tangential space of each point in the manifold (corresponding to a node in the
graph) would locally resemble a Euclidean space. Although this condition helps
in achieving efficient analytical solutions to the embedding problem, it does
not represent an adequate set-up to work with modern real life graphs, that are
characterized by weighted connections across nodes often computed over sparse
datasets with missing records. In this work, we introduce a new class of
manifolds, named soft manifolds, that can handle this situation. In particular,
soft manifolds are mathematical structures with spherical symmetry where the
tangent spaces to each point are hypocycloids whose shape is defined according
to the velocity of information propagation across the data points. Using soft
manifolds for graph embedding, we can provide continuous spaces to pursue any
task in data analysis over complex datasets. Experimental results on
reconstruction tasks on synthetic and real datasets show how the proposed
approach enables a more accurate and reliable characterization of graphs in
continuous spaces than the state of the art.
</p>
arXiv: Computational Geometryhttps://arxiv.org/list/cs.CG/recentarXiv: Data Structures and Algorithms: An Efficient Algorithm for Unbalanced 1D Transportationhttp://arxiv.org/abs/2311.177042023-11-30T01:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Gouvine_G/0/1/0/all/0/1">Gabriel Gouvine</a></p><p>Optimal transport (OT) and unbalanced optimal transport (UOT) are central in
many machine learning, statistics and engineering applications. 1D OT is easily
solved, with complexity O(n log n), but no efficient algorithm was known for 1D
UOT. We present a new approach that leverages the successive shortest path
algorithm for the corresponding network flow problem. By employing a suitable
representation, we bundle together multiple steps that do not change the cost
of the shortest path. We prove that our algorithm solves 1D UOT in O(n log n),
closing the gap.
</p>
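<p>For context on the balanced baseline the abstract mentions: in 1D with unit masses and cost \(|x-y|\), an optimal plan simply matches the sorted points in order, so OT reduces to two sorts. A minimal sketch of that baseline follows; the unbalanced case, where mass can also be created or destroyed at a price, is what the paper's network-flow approach handles in the same \(O(n \log n)\) bound.</p>

```python
def ot_1d(xs, ys):
    # Balanced 1D optimal transport with unit masses and cost |x - y|:
    # sorting both sides and matching in order is optimal, so the whole
    # computation is two sorts plus a linear pass.
    assert len(xs) == len(ys)
    return sum(abs(x - y) for x, y in zip(sorted(xs), sorted(ys)))

print(ot_1d([3.0, 0.0, 1.0], [2.0, 4.0, 1.0]))
```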
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Lower Bounds on Adaptive Sensing for Matrix Recoveryhttp://arxiv.org/abs/2311.172812023-11-30T01:30:00+00:00
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Kacham_P/0/1/0/all/0/1">Praneeth Kacham</a>, <a href="http://arxiv.org/find/cs/1/au:+Woodruff_D/0/1/0/all/0/1">David P Woodruff</a></p><p>We study lower bounds on adaptive sensing algorithms for recovering low rank
matrices using linear measurements. Given an $n \times n$ matrix $A$, a general
linear measurement $S(A)$, for an $n \times n$ matrix $S$, is just the inner
product of $S$ and $A$, each treated as $n^2$-dimensional vectors. By
performing as few linear measurements as possible on a rank-$r$ matrix $A$, we
hope to construct a matrix $\hat{A}$ that satisfies $\|A - \hat{A}\|_F^2 \le
c\|A\|_F^2$, for a small constant $c$. It is commonly assumed that when
measuring $A$ with $S$, the response is corrupted with an independent Gaussian
random variable of mean $0$ and variance $\sigma^2$. Cand\`es and Plan study
non-adaptive algorithms for low rank matrix recovery using random linear
measurements.
</p>
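The measurement model above is simple to state in code: flatten both matrices, take their inner product, and corrupt the response with Gaussian noise. A sketch with illustrative names (not the paper's notation or API):

```python
import numpy as np

def measure(S, A, sigma=0.1, rng=None):
    """One noisy linear measurement: <S, A> with both matrices treated
    as n^2-dimensional vectors, plus independent N(0, sigma^2) noise."""
    rng = rng or np.random.default_rng(0)
    return float(np.vdot(S, A)) + sigma * rng.standard_normal()

# Noiseless sanity check: <I, J> equals the trace contribution, here 2.0.
print(measure([[1.0, 0.0], [0.0, 1.0]], [[1.0, 1.0], [1.0, 1.0]], sigma=0.0))
```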
<p>At a certain noise level, it is known that their non-adaptive algorithms need
to perform $\Omega(n^2)$ measurements, which amounts to reading the entire
matrix. An important question is whether adaptivity helps in decreasing the
overall number of measurements. We show that any adaptive algorithm that uses
$k$ linear measurements in each round and outputs an approximation to the
underlying matrix with probability $\ge 9/10$ must run for $t =
\Omega(\log(n^2/k)/\log\log n)$ rounds. In particular, any adaptive algorithm
which uses $n^{2-\beta}$ linear measurements in each round must run for
$\Omega(\log n/\log\log n)$ rounds to compute a reconstruction with probability
$\ge 9/10$. Hence any adaptive algorithm that has $o(\log n/\log\log n)$ rounds
must use an overall $\Omega(n^2)$ linear measurements. Our techniques also
readily extend to obtain lower bounds on adaptive algorithms for tensor
recovery and to obtain measurement-vs-rounds trade-offs for many sensing problems
in numerical linear algebra, such as spectral norm low rank approximation,
Frobenius norm low rank approximation, singular vector approximation, and more.
</p>
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)
arXiv: Data Structures and Algorithms: Asynchronous Merkle Trees (http://arxiv.org/abs/2311.17441, 2023-11-30)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Kharangate_A/0/1/0/all/0/1">Anoushk Kharangate</a></p><p>Merkle trees have become a widely successful cryptographic data structure.
Enabling a vast variety of applications from checking for inconsistencies in
databases like Dynamo to essential tools like Git to large scale distributed
systems like Bitcoin and other blockchains. There have also been various
versions of Merkle trees like Jellyfish Merkle Trees and Sparse Merkle Trees
designed for different applications. However, one key drawback of all these
Merkle trees is that with a large data set the cost of computing the tree
increases significantly, moreover insert operations on a single leaf require
re-building the entire tree. For certain use cases building the tree this way
is acceptable, however in environments where compute time needs to be as low as
possible and where data is processed in parallel, we are presented with a need
for asynchronous computation. This paper proposes a solution where given a
batch of data that has to be processed concurrently, a Merkle Tree can be
constructed from the batch asynchronously without needing to recalculate the
tree for every insert.
</p>
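For contrast with the asynchronous proposal, the standard synchronous construction hashes pairs of nodes level by level and must be recomputed from the leaves on any change. A minimal sketch using SHA-256; the function names and the odd-level duplication convention are illustrative, not the paper's design:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Standard bottom-up Merkle root: hash each leaf, then repeatedly
    hash adjacent pairs until one node remains. The whole tree is
    rebuilt on any change, the cost the asynchronous variant targets."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:           # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

print(merkle_root([b"a", b"b", b"c"]).hex())
```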
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)
arXiv: Data Structures and Algorithms: Dynamic Programming Algorithms for Discovery of Antibiotic Resistance in Microbial Genomes (http://arxiv.org/abs/2311.17538, 2023-11-30)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/q-bio/1/au:+Helal_M/0/1/0/all/0/1">Manal Helal</a>, <a href="http://arxiv.org/find/q-bio/1/au:+Sintchenko_V/0/1/0/all/0/1">Vitali Sintchenko</a></p><p>The translation of comparative genomics into clinical decision support tools
often depends on the quality of sequence alignments. However, currently used
methods of multiple sequence alignments suffer from significant biases and
problems with aligning diverged sequences. The objective of this study was to
develop and test a new multiple sequence alignment (MSA) algorithm suitable for
the high-throughput comparative analysis of different microbial genomes. This
algorithm employs an innovative tensor indexing method for partitioning the
dynamic programming hyper-cube space for parallel processing. We have used the
clinically relevant task of identifying regions that determine resistance to
antibiotics to test the new algorithm and to compare its performance with
existing MSA methods. The new method "mmDst" performed better than existing MSA
algorithms for more divergent sequences because it employs a simultaneous
alignment scoring recurrence, which effectively approximated the score for edge
missing cell scores that fall outside the scoring region.
</p>
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)
arXiv: Data Structures and Algorithms: The Symmetric alpha-Stable Privacy Mechanism (http://arxiv.org/abs/2311.17789, 2023-11-30)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Zawacki_C/0/1/0/all/0/1">Christopher Zawacki</a>, <a href="http://arxiv.org/find/cs/1/au:+Abed_E/0/1/0/all/0/1">Eyad Abed</a></p><p>With the rapid growth of digital platforms, there is increasing apprehension
about how personal data is being collected, stored, and used by various
entities. These concerns range from data breaches and cyber-attacks to
potential misuse of personal information for targeted advertising and
surveillance. As a result, differential privacy (DP) has emerged as a prominent
tool for quantifying a system's level of protection. The Gaussian mechanism is
commonly used because the Gaussian density is closed under convolution, a
common method utilized when aggregating datasets. However, the Gaussian
mechanism only satisfies approximate differential privacy. In this work, we
present a novel analysis of the Symmetric alpha-Stable (SaS) mechanism. We prove
that the mechanism is purely differentially private while remaining closed
under convolution. From our analysis, we believe the SaS mechanism is an
appealing choice for privacy-focused applications.
</p>
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)
arXiv: Data Structures and Algorithms: A quasi-polynomial time algorithm for Multi-Dimensional Scaling via LP hierarchies (http://arxiv.org/abs/2311.17840, 2023-11-30)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Bakshi_A/0/1/0/all/0/1">Ainesh Bakshi</a>, <a href="http://arxiv.org/find/cs/1/au:+Cohen_Addad_V/0/1/0/all/0/1">Vincent Cohen-Addad</a>, <a href="http://arxiv.org/find/cs/1/au:+Hopkins_S/0/1/0/all/0/1">Samuel B. Hopkins</a>, <a href="http://arxiv.org/find/cs/1/au:+Jayaram_R/0/1/0/all/0/1">Rajesh Jayaram</a>, <a href="http://arxiv.org/find/cs/1/au:+Lattanzi_S/0/1/0/all/0/1">Silvio Lattanzi</a></p><p>Multi-dimensional Scaling (MDS) is a family of methods for embedding
pair-wise dissimilarities between $n$ objects into low-dimensional space. MDS
is widely used as a data visualization tool in the social and biological
sciences, statistics, and machine learning. We study the Kamada-Kawai
formulation of MDS: given a set of non-negative dissimilarities $\{d_{i,j}\}_{i
, j \in [n]}$ over $n$ points, the goal is to find an embedding
$\{x_1,\dots,x_n\} \subset \mathbb{R}^k$ that minimizes \[ \text{OPT} =
\min_{x} \mathbb{E}_{i,j \in [n]} \left[ \left(1-\frac{\|x_i -
x_j\|}{d_{i,j}}\right)^2 \right] \]
</p>
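The Kamada-Kawai objective above is easy to evaluate for a candidate embedding; a sketch of the stress computation, assuming the expectation is uniform over ordered pairs $i \neq j$:

```python
import numpy as np

def kk_stress(X, D):
    """Kamada-Kawai MDS cost: the mean over ordered pairs i != j of
    (1 - ||x_i - x_j|| / d_ij)^2."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(n):
            if i != j:
                total += (1.0 - np.linalg.norm(X[i] - X[j]) / D[i][j]) ** 2
                pairs += 1
    return total / pairs

# A unit equilateral triangle embedded exactly has (near-)zero stress.
X = [[0.0, 0.0], [1.0, 0.0], [0.5, 3 ** 0.5 / 2]]
D = [[1.0] * 3 for _ in range(3)]
print(kk_stress(X, D))
```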
<p>Despite its popularity, our theoretical understanding of MDS is extremely
limited. Recently, Demaine, Hesterberg, Koehler, Lynch, and Urschel
(<a href="/abs/2109.11505">arXiv:2109.11505</a>) gave the first approximation algorithm with provable
guarantees for Kamada-Kawai, which achieves an embedding with cost $\text{OPT}
+\epsilon$ in $n^2 \cdot 2^{\tilde{\mathcal{O}}(k \Delta^4 / \epsilon^2)}$
time, where $\Delta$ is the aspect ratio of the input dissimilarities. In this
work, we give the first approximation algorithm for MDS with quasi-polynomial
dependency on $\Delta$: for target dimension $k$, we achieve a solution with
cost $\mathcal{O}(\text{OPT}^{ \hspace{0.04in}1/k } \cdot \log(\Delta/\epsilon)
)+ \epsilon$ in time $n^{ \mathcal{O}(1)} \cdot 2^{\tilde{\mathcal{O}}( k^2
(\log(\Delta)/\epsilon)^{k/2 + 1} ) }$.
</p>
<p>Our approach is based on a novel analysis of a conditioning-based rounding
scheme for the Sherali-Adams LP Hierarchy. Crucially, our analysis exploits the
geometry of low-dimensional Euclidean space, allowing us to avoid an
exponential dependence on the aspect ratio $\Delta$. We believe our
geometry-aware treatment of the Sherali-Adams Hierarchy is an important step
towards developing general-purpose techniques for efficient metric optimization
algorithms.
</p>
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)
Computational Complexity: The Engineer and The Computer Scientist (2023-11-29)
<p>What is the difference between engineering and computer science? CS is not an engineering field, though there is some overlap. Nor is the difference digital versus physical. To capture it, we can look at two tech CEOs dominating the news recently, Elon Musk and Sam Altman.</p><p>Engineers are problem solvers, creating or improving technologies to reach a goal. Tesla has led the way for cars to become electric, connected and autonomous. SpaceX develops highly capable rocket technology at lower cost. Ultimately, though, Tesla makes cars that go from A to B and SpaceX sends stuff into space. When Musk bought Twitter he focused on the engineering, but Twitter's challenges were not in its engineering, and this is why Twitter/X has been floundering.</p><p>Computer scientists make platforms. The Internet, cloud computing, programming languages, mobile computing, cryptography, databases, social networks, even NP-completeness don't focus on individual problems; rather, they create environments that users and developers can apply to a variety of new applications and challenges.</p><p>AI is no exception. OpenAI has no specific problem it is trying to solve. There's AGI, but that's more of a vague benchmark that may (or may not) be passed as ChatGPT and its successors continue to improve.</p><p>AWS, Python, Apple and OpenAI all have developer conferences. Tesla and SpaceX do not. Elon Musk has actually made Twitter/X harder for developers to build on. I don't hold high hopes for <a href="https://www.cnbc.com/2023/11/05/elon-musk-debuts-grok-ai-bot-to-rival-chatgpt-others-.html">Grok</a>.</p><p>It's not a perfect division: many engineers create platforms, and computer scientists tackle specific problems. Nevertheless, it's a good way to see the distinction between the fields.</p><p>Don't think the CS way is always the better way. You have less control over platforms; they can act in unexpected ways, and people can use them, unintentionally or intentionally, to cause harm. 
Mitigating those harms is a challenge we must continuously address.</p><p class="authors">By Lance Fortnow</p>
Computational Complexity (http://blog.computationalcomplexity.org/)
CCI: jobs: PhD position in Quantum Learning Theory at University of Warwick (apply by January 1, 2024) (http://cstheory-jobs.org/2023/11/29/phd-position-in-quantum-learning-theory-at-university-of-warwick-apply-by-january-1-2024/, 2023-11-29)
<p>One funded PhD position is available in the group of Dr Matthias C. Caro, who will join the University of Warwick, UK, in Fall 2024. This is an excellent opportunity to join one of the most active CS theory groups in the UK and to become part of the interdisciplinary research initiative Warwick Quantum. Candidates interested in quantum computing and learning theory are encouraged to apply.</p>
<p>Website: <a href="https://warwick.ac.uk/fac/sci/dcs/news/?newsItem=8a1785d88bf6b075018c15b1e42170fb">https://warwick.ac.uk/fac/sci/dcs/news/?newsItem=8a1785d88bf6b075018c15b1e42170fb</a><br />
Email: matthias.caro@fu-berlin.de</p>
<p class="authors">By shacharlovett</p>
CCI: jobs (https://cstheory-jobs.org)
arXiv: Computational Complexity: Parameterized Inapproximability Hypothesis under ETH (http://arxiv.org/abs/2311.16587, 2023-11-29)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Guruswami_V/0/1/0/all/0/1">Venkatesan Guruswami</a>, <a href="http://arxiv.org/find/cs/1/au:+Lin_B/0/1/0/all/0/1">Bingkai Lin</a>, <a href="http://arxiv.org/find/cs/1/au:+Ren_X/0/1/0/all/0/1">Xuandi Ren</a>, <a href="http://arxiv.org/find/cs/1/au:+Sun_Y/0/1/0/all/0/1">Yican Sun</a>, <a href="http://arxiv.org/find/cs/1/au:+Wu_K/0/1/0/all/0/1">Kewen Wu</a></p><p>The Parameterized Inapproximability Hypothesis (PIH) asserts that no fixed
parameter tractable (FPT) algorithm can distinguish a satisfiable CSP instance,
parameterized by the number of variables, from one where every assignment fails
to satisfy an $\varepsilon$ fraction of constraints for some absolute constant
$\varepsilon > 0$. PIH plays the role of the PCP theorem in parameterized
complexity. However, PIH has only been established under Gap-ETH, a very strong
assumption with an inherent gap.
</p>
<p>In this work, we prove PIH under the Exponential Time Hypothesis (ETH). This
is the first proof of PIH from a gap-free assumption. Our proof is
self-contained and elementary. We identify an ETH-hard CSP whose variables take
vector values, and constraints are either linear or of a special parallel
structure. Both kinds of constraints can be checked with constant soundness via
a "parallel PCP of proximity" based on the Walsh-Hadamard code.
</p>
arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)
arXiv: Computational Complexity: Fair Interventions in Weighted Congestion Games (http://arxiv.org/abs/2311.16760, 2023-11-29)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Fischer_M/0/1/0/all/0/1">Miriam Fischer</a>, <a href="http://arxiv.org/find/cs/1/au:+Gairing_M/0/1/0/all/0/1">Martin Gairing</a>, <a href="http://arxiv.org/find/cs/1/au:+Paccagnan_D/0/1/0/all/0/1">Dario Paccagnan</a></p><p>In this work we study the power and limitations of fair interventions in
weighted congestion games. Specifically, we focus on interventions that aim at
improving the equilibrium quality (price of anarchy) and are fair in the sense
that identical players receive identical treatment. Within this setting, we
provide three key contributions: First, we show that no fair intervention can
reduce the price of anarchy below a given factor depending solely on the class
of latencies considered. Interestingly, this lower bound is unconditional,
i.e., it applies regardless of how much computation interventions are allowed
to use. Second, we propose a taxation mechanism that is fair and show that the
resulting price of anarchy matches this lower bound, while the mechanism can be
efficiently computed in polynomial time. Third, we complement these results by
showing that no intervention (fair or not) can achieve a better approximation
if polynomial computability is required. We do so by proving that the minimum
social cost is NP-hard to approximate below a factor identical to the one
previously introduced. In doing so, we also show that the randomized algorithm
proposed by Makarychev and Sviridenko (Journal of the ACM, 2018) for the class
of optimization problems with a "diseconomy of scale" is optimal, and provide a
novel way to derandomize its solution via equilibrium computation.
</p>
arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)
arXiv: Data Structures and Algorithms: Matrix Multiplication in Quadratic Time and Energy? Towards a Fine-Grained Energy-Centric Church-Turing Thesis (http://arxiv.org/abs/2311.16342, 2023-11-29)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Valiant_G/0/1/0/all/0/1">Gregory Valiant</a></p><p>We describe two algorithms for multiplying n x n matrices using time and
energy n^2 polylog(n) under basic models of classical physics. The first
algorithm is for multiplying integer-valued matrices, and the second, quite
different algorithm, is for Boolean matrix multiplication. We hope this work
inspires a deeper consideration of physically plausible/realizable models of
computing that might allow for algorithms which improve upon the runtimes and
energy usages suggested by the parallel RAM model in which each operation
requires one unit of time and one unit of energy.
</p>
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)
arXiv: Data Structures and Algorithms: Property Testing with Online Adversaries (http://arxiv.org/abs/2311.16566, 2023-11-29)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Ben_Eliezer_O/0/1/0/all/0/1">Omri Ben-Eliezer</a>, <a href="http://arxiv.org/find/cs/1/au:+Kelman_E/0/1/0/all/0/1">Esty Kelman</a>, <a href="http://arxiv.org/find/cs/1/au:+Meir_U/0/1/0/all/0/1">Uri Meir</a>, <a href="http://arxiv.org/find/cs/1/au:+Raskhodnikova_S/0/1/0/all/0/1">Sofya Raskhodnikova</a></p><p>The online manipulation-resilient testing model, proposed by Kalemaj,
Raskhodnikova and Varma (ITCS 2022 and Theory of Computing 2023), studies
property testing in situations where access to the input degrades continuously
and adversarially. Specifically, after each query made by the tester is
answered, the adversary can intervene and either erase or corrupt $t$ data
points. In this work, we investigate a more nuanced version of the online model
in order to overcome old and new impossibility results for the original model.
We start by presenting an optimal tester for linearity and a lower bound for
low-degree testing of Boolean functions in the original model. We overcome the
lower bound by allowing batch queries, where the tester gets a group of queries
answered between manipulations of the data. Our batch size is small enough so
that function values for a single batch on their own give no information about
whether the function is of low degree. Finally, to overcome the impossibility
results of Kalemaj et al. for sortedness and the Lipschitz property of
sequences, we extend the model to include $t<1$, i.e., adversaries that make
fewer than one erasure per query on average. For sortedness, we characterize the rate of
erasures for which online testing can be performed, exhibiting a sharp
transition from optimal query complexity to impossibility of testability (with
any number of queries). Our online tester works for a general class of local
properties of sequences. One feature of our results is that we get new (and in
some cases, simpler) optimal algorithms for several properties in the standard
property testing model.
</p>
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)
arXiv: Data Structures and Algorithms: An Exploration of Left-Corner Transformations (http://arxiv.org/abs/2311.16258, 2023-11-29)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Opedal_A/0/1/0/all/0/1">Andreas Opedal</a>, <a href="http://arxiv.org/find/cs/1/au:+Tsipidi_E/0/1/0/all/0/1">Eleftheria Tsipidi</a>, <a href="http://arxiv.org/find/cs/1/au:+Pimentel_T/0/1/0/all/0/1">Tiago Pimentel</a>, <a href="http://arxiv.org/find/cs/1/au:+Cotterell_R/0/1/0/all/0/1">Ryan Cotterell</a>, <a href="http://arxiv.org/find/cs/1/au:+Vieira_T/0/1/0/all/0/1">Tim Vieira</a></p><p>The left-corner transformation (Rosenkrantz and Lewis, 1970) is used to
remove left recursion from context-free grammars, which is an important step
towards making the grammar parsable top-down with simple techniques. This paper
generalizes prior left-corner transformations to support semiring-weighted
production rules and to provide finer-grained control over which left corners
may be moved. Our generalized left-corner transformation (GLCT) arose from
unifying the left-corner transformation and speculation transformation (Eisner
and Blatz, 2007), originally for logic programming. Our new transformation and
speculation define equivalent weighted languages. Yet, their derivation trees
are structurally different in an important way: GLCT replaces left recursion
with right recursion, and speculation does not. We also provide several
technical results regarding the formal relationships between the outputs of
GLCT, speculation, and the original grammar. Lastly, we empirically investigate
the efficiency of GLCT for left-recursion elimination from grammars of nine
languages.
</p>
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)
arXiv: Data Structures and Algorithms: On the quantum time complexity of divide and conquer (http://arxiv.org/abs/2311.16401, 2023-11-29)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/quant-ph/1/au:+Allcock_J/0/1/0/all/0/1">Jonathan Allcock</a>, <a href="http://arxiv.org/find/quant-ph/1/au:+Bao_J/0/1/0/all/0/1">Jinge Bao</a>, <a href="http://arxiv.org/find/quant-ph/1/au:+Belovs_A/0/1/0/all/0/1">Aleksandrs Belovs</a>, <a href="http://arxiv.org/find/quant-ph/1/au:+Lee_T/0/1/0/all/0/1">Troy Lee</a>, <a href="http://arxiv.org/find/quant-ph/1/au:+Santha_M/0/1/0/all/0/1">Miklos Santha</a></p><p>We initiate a systematic study of the time complexity of quantum divide and
conquer algorithms for classical problems. We establish generic conditions
under which search and minimization problems with classical divide and conquer
algorithms are amenable to quantum speedup and apply these theorems to an array
of problems involving strings, integers, and geometric objects. They include
LONGEST DISTINCT SUBSTRING, KLEE'S COVERAGE, several optimization problems on
stock transactions, and k-INCREASING SUBSEQUENCE. For most of these results,
our quantum time upper bound matches the quantum query lower bound for the
problem, up to polylogarithmic factors.
</p>
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)
arXiv: Data Structures and Algorithms: A Combinatorial Approach to Robust PCA (http://arxiv.org/abs/2311.16416, 2023-11-29)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Kong_W/0/1/0/all/0/1">Weihao Kong</a>, <a href="http://arxiv.org/find/cs/1/au:+Qiao_M/0/1/0/all/0/1">Mingda Qiao</a>, <a href="http://arxiv.org/find/cs/1/au:+Sen_R/0/1/0/all/0/1">Rajat Sen</a></p><p>We study the problem of recovering Gaussian data under adversarial
corruptions when the noises are low-rank and the corruptions are on the
coordinate level. Concretely, we assume that the Gaussian noises lie in an
unknown $k$-dimensional subspace $U \subseteq \mathbb{R}^d$, and $s$ randomly
chosen coordinates of each data point fall into the control of an adversary.
This setting models the scenario of learning from high-dimensional yet
structured data that are transmitted through a highly-noisy channel, so that
the data points are unlikely to be entirely clean.
</p>
<p>Our main result is an efficient algorithm that, when $ks^2 = O(d)$, recovers
every single data point up to a nearly-optimal $\ell_1$ error of $\tilde
O(ks/d)$ in expectation. At the core of our proof is a new analysis of the
well-known Basis Pursuit (BP) method for recovering a sparse signal, which is
known to succeed under additional assumptions (e.g., incoherence or the
restricted isometry property) on the underlying subspace $U$. In contrast, we
present a novel approach via studying a natural combinatorial problem and show
that, over the randomness in the support of the sparse signal, a
high-probability error bound is possible even if the subspace $U$ is arbitrary.
</p>
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)
arXiv: Data Structures and Algorithms: l2Match: Optimization Techniques on Subgraph Matching Algorithm using Label Pair, Neighboring Label Index, and Jump-Redo method (http://arxiv.org/abs/2311.16603, 2023-11-29)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Cheng_C/0/1/0/all/0/1">C. Q. Cheng</a>, <a href="http://arxiv.org/find/cs/1/au:+Wong_K/0/1/0/all/0/1">K. S. Wong</a>, <a href="http://arxiv.org/find/cs/1/au:+Soon_L/0/1/0/all/0/1">L. K. Soon</a></p><p>Graph database is designed to store bidirectional relationships between
objects and facilitate the traversal process to extract a subgraph. However,
the subgraph matching process is an NP-Complete problem. Existing solutions to
this problem usually employ a filter-and-verification framework and a
divide-and-conquer method. The filter-and-verification framework minimizes the
number of inputs to the verification stage by filtering and pruning invalid
candidates as much as possible. Meanwhile, subgraph matching is performed on
the substructures decomposed from the larger graph to yield partial embeddings.
Subsequently, a recursive traversal or set intersection technique combines
the partial embeddings into a complete subgraph. In this paper, we first present
a comprehensive literature review of the state-of-the-art solutions. l2Match, a
subgraph isomorphism algorithm for small queries utilizing a Label-Pair Index
and filtering method, is then proposed and presented as a proof of concept.
Empirical experimentation shows that l2Match outperforms related
state-of-the-art solutions, and the proposed methods optimize the existing
algorithms.
</p>
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)
arXiv: Data Structures and Algorithms: $k$-times bin packing and its application to fair electricity distribution (http://arxiv.org/abs/2311.16742, 2023-11-29)
<p class="arxiv-authors"><b>Authors:</b> <a href="http://arxiv.org/find/cs/1/au:+Baghel_D/0/1/0/all/0/1">Dinesh Kumar Baghel</a>, <a href="http://arxiv.org/find/cs/1/au:+Segal_Halevi_E/0/1/0/all/0/1">Erel Segal-Halevi</a></p><p>Given items of different sizes and a fixed bin capacity, the bin-packing
problem is to pack these items into a minimum number of bins such that the sum
of item sizes in a bin does not exceed the capacity. We define a new variant
called $k$-times bin packing ($k$BP), where the goal is to pack the items such
that each item appears exactly $k$ times, in $k$ different bins. We generalize
some existing approximation algorithms for bin-packing to solve $k$BP, and
analyze their performance ratio.
</p>
<p>The study of $k$BP is motivated by the problem of fair electricity
distribution. In many developing countries, the total electricity demand is
higher than the supply capacity. We show that $k$-times bin packing can be used
to distribute the electricity in a fair and efficient way. Particularly, we
implement generalizations of the First-Fit and First-Fit-Decreasing bin-packing
algorithms to solve $k$BP, and apply the generalizations to real electricity
demand data. We show that our generalizations outperform existing heuristic
solutions to the same problem.
</p>
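A natural reading of the First-Fit generalization mentioned above: place each item into the first $k$ bins that have room and do not already hold a copy of it, opening new bins as needed. A sketch under that assumption (the paper's exact variant may differ):

```python
def k_first_fit(sizes, capacity, k):
    """k-times First-Fit sketch: each item must land in k distinct bins,
    so First-Fit skips bins that already contain a copy of the item."""
    bins = []  # each bin: [remaining capacity, set of item indices held]
    for idx, size in enumerate(sizes):
        placed = 0
        for b in bins:
            if placed == k:
                break
            if b[0] >= size and idx not in b[1]:
                b[0] -= size
                b[1].add(idx)
                placed += 1
        while placed < k:          # open fresh bins for remaining copies
            bins.append([capacity - size, {idx}])
            placed += 1
    return len(bins)

# Two 0.6-items cannot share a bin, and each needs two distinct bins.
print(k_first_fit([0.6, 0.6, 0.3], capacity=1.0, k=2))  # -> 4
```

With k=1 this reduces to the classical First-Fit heuristic.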
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)