Theory of Computing Report

Computational Complexity: Favorite Theorems: Quantum Provers (2024-04-18)
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p>For our next favorite theorem, we look at the surprising power of provers who share entangled bits. If you can prove something to an arbitrary computable verifier, then two entangled provers can convince a polynomial-time verifier of it.</p>
<p style="text-align: center;"><a href="https://arxiv.org/abs/2001.04383">MIP* = RE</a><br>Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright and Henry Yuen</p>
<p style="text-align: left;">A quick tour:</p>
<ul style="text-align: left;">
<li>A powerful prover convincing a skeptical computable deterministic verifier is another way of capturing computably enumerable (traditionally called recursively enumerable or RE). You can convince a verifier that a program halts by giving the halting time, and the verifier can simulate the machine for that many steps.</li>
<li>A powerful prover convincing a skeptical polytime deterministic verifier is another way of capturing the class NP, like giving a 3-coloring of a graph that can be easily checked.</li>
<li>If you allow the verifier to ask random questions, you can convince the verifier with high confidence that a graph is not 3-colorable, or more generally of any PSPACE problem.</li>
<li>If you add two separated provers that a skeptical probabilistic verifier can play off each other, the provers can convince the verifier that any problem in NEXP, non-deterministic exponential time.</li>
</ul>
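<p>As a toy illustration of the first bullet (my sketch, not the post's protocol), model a program as a step function on a state. The prover's certificate is a claimed halting time <i>t</i>, and the deterministic verifier simply simulates for <i>t</i> steps:</p>

```python
def verifier_accepts(step, init_state, is_halted, t):
    """RE-style verifier: accept iff the machine halts within t steps."""
    state = init_state
    for _ in range(t):
        if is_halted(state):
            return True
        state = step(state)
    return is_halted(state)

# Toy machine: a countdown that halts when the counter reaches 0.
step = lambda n: n - 1
halted = lambda n: n == 0
assert verifier_accepts(step, 5, halted, 5)      # honest certificate
assert not verifier_accepts(step, 5, halted, 3)  # too-small certificate fails
```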
<div>One of many quantum variations of interactive proofs, MIP* has two provers that cannot communicate but have entangled quantum bits. This change could go either way: </div>
<div><ul style="text-align: left;">
<li>The provers could coordinate their answers better, and so might no longer convince the verifier for all the languages in NEXP.</li>
<li>The verifier could ask the provers more complex questions, which they could answer using the entanglement, allowing the provers to convince the verifier of even more complex languages.</li>
</ul></div>
<div>It turns out to be the latter, in a very strong way.</div>
<div><br></div>
<div>
<a href="https://doi.org/10.1109/FOCS.2012.11">Ito and Vidick</a> showed that you can create a protocol that prevents the provers from coordinating too well, recovering all problems in NEXP. <a href="https://doi.org/10.1109/FOCS.2019.00039">Vidick and Wright</a> showed you can ask more complex questions: provers with entangled bits can convince a verifier of everything in NEEXP, non-deterministic double exponential time (\(2^{2^{n^c}}\)), where the proof is already too large for the verifier even to point into. The MIP* = RE paper takes that all the way to the computably enumerable sets, all the languages you would get with a classical prover convincing a deterministic verifier unrestricted by time.</div>
<p class="authors">By Lance Fortnow</p>
</body></html>
Source: Computational Complexity (http://blog.computationalcomplexity.org/)

CS Theory Events: TCS for All Rising Star Workshop and Travel Scholarships (2024-04-18)
http://cstheory-events.org/2024/04/18/tcs-for-all-rising-star-workshop-and-travel-scholarships/
<html><body>
<p>April 18-30, 2024 Vancouver, Canada https://sigact.org/tcsforall/ Submission deadline: April 28, 2024. The nomination deadline for the TCS for All Spotlight Workshop and travel scholarships to attend STOC is April 28. Nominate your students and postdocs who will be on the job market in the near future. Since 2018, TCS for All (previously TCS for Women) has been supporting … <a href="https://cstheory-events.org/2024/04/18/tcs-for-all-rising-star-workshop-and-travel-scholarships/" class="more-link">Continue reading <span class="screen-reader-text">TCS for All Rising Star Workshop and Travel Scholarships</span></a></p>
<p class="authors">By shacharlovett</p>
</body></html>
Source: CS Theory Events (https://cstheory-events.org)

arXiv: Computational Complexity: Finding $d$-Cuts in Graphs of Bounded Diameter, Graphs of Bounded Radius and $H$-Free Graphs (2024-04-18)
http://arxiv.org/abs/2404.11389v1
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Felicia+Lucke">Felicia Lucke</a>, <a href="https://dblp.uni-trier.de/search?q=Ali+Momeni">Ali Momeni</a>, <a href="https://dblp.uni-trier.de/search?q=Dani%C3%ABl+Paulusma">Daniël Paulusma</a>, <a href="https://dblp.uni-trier.de/search?q=Siani+Smith">Siani Smith</a></p>The $d$-Cut problem is to decide if a graph has an edge cut such that each
vertex has at most $d$ neighbours on the opposite side of the cut. If $d=1$, we
obtain the intensively studied Matching Cut problem. The $d$-Cut problem has
been studied as well, but a systematic study for special graph classes was
lacking. We initiate such a study and consider classes of bounded diameter,
bounded radius and $H$-free graphs. We prove that for all $d\geq 2$, $d$-Cut is
polynomial-time solvable for graphs of diameter $2$, $(P_3+P_4)$-free graphs
and $P_5$-free graphs. These results extend known results for $d=1$. However,
we also prove several NP-hardness results for $d$-Cut that contrast known
polynomial-time results for $d=1$. Our results lead to full dichotomies for
bounded diameter and bounded radius and to almost-complete dichotomies for
$H$-free graphs.</body></html>
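The defining condition above is easy to check directly: a bipartition is a $d$-cut when both sides are non-empty and every vertex has at most $d$ neighbours across the cut. A minimal Python sketch (my illustration; the brute-force search is exponential, in contrast to the paper's polynomial-time results for special classes):

```python
from itertools import product

def is_d_cut(vertices, edges, side, d):
    """Check that side (vertex -> 0/1, both parts non-empty) is a d-cut:
    every vertex has at most d neighbours on the opposite side."""
    across = {v: 0 for v in vertices}
    for u, v in edges:
        if side[u] != side[v]:
            across[u] += 1
            across[v] += 1
    return len(set(side.values())) == 2 and all(c <= d for c in across.values())

def has_d_cut(vertices, edges, d):
    """Naive exponential search over all bipartitions."""
    return any(is_d_cut(vertices, edges, dict(zip(vertices, bits)), d)
               for bits in product([0, 1], repeat=len(vertices)))

V = [1, 2, 3, 4]
path = [(1, 2), (2, 3), (3, 4)]               # the path on 4 vertices
k4 = [(i, j) for i in V for j in V if i < j]  # the complete graph K4
assert is_d_cut(V, path, {1: 0, 2: 0, 3: 1, 4: 1}, 1)  # a matching cut
assert not has_d_cut(V, k4, 1) and has_d_cut(V, k4, 2)
```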
Source: arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)

arXiv: Computational Complexity: Witnessing Flows in Arithmetic (2024-04-18)
http://arxiv.org/abs/2404.11218v1
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Amirhossein+Akbar+Tabatabai">Amirhossein Akbar Tabatabai</a></p>One of the elegant achievements in the history of proof theory is the
characterization of the provably total recursive functions of an arithmetical
theory by its proof-theoretic ordinal as a way to measure the time complexity
of the functions. Unfortunately, the machinery is not sufficiently fine-grained
to be applicable to weak theories on the one hand, and to capture the
bounded functions with bounded definitions of strong theories, on the other. In
this paper, we develop such a machinery to address the bounded theorems of both
strong and weak theories of arithmetic. In the first part, we provide a refined
version of ordinal analysis to capture the feasibly definable and bounded
functions that are provably total in $\mathrm{PA}+\bigcup_{\beta \prec \alpha}
\mathrm{TI}(\prec_{\beta})$, the extension of Peano arithmetic by transfinite
induction up to the ordinals below $\alpha$. Roughly speaking, we identify the
functions as the ones that are computable by a sequence of
$\mathrm{PV}$-provable polynomial time modifications on an initial polynomial
time value, where the computational steps are indexed by the ordinals below
$\alpha$, decreasing by the modifications. In the second part, choosing $l
\leq k$, we use a similar technique to capture the functions with bounded
definitions in the theory $T^k_2$ (resp. $S^k_2$) as the functions computable
by an exponentially (resp. polynomially) long sequence of
$\mathrm{PV}_{k-l+1}$-provable reductions between $l$-turn games starting with
an explicit $\mathrm{PV}_{k-l+1}$-provable winning strategy for the first game.</body></html>
Source: arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)

arXiv: Computational Complexity: Review of Automaton Learning Algorithms with Polynomial Complexity -- Completely Solved Examples (2024-04-18)
http://arxiv.org/abs/2404.11096v1
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Farah+Haneef">Farah Haneef</a></p>Automaton learning is a domain in which the target system is inferred by the
automaton learning algorithm in the form of an automaton, by synthesizing a
finite number of inputs and their corresponding outputs. Automaton learning
makes use of a Minimally Adequate Teacher (MAT). The learner learns the target
system by posing membership queries to the MAT. In this chapter, I provide
completely solved examples of automaton learning algorithms. To the best of my
knowledge, these are not available in any other source.</body></html>
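A membership query, the learner's basic primitive here, is simple to realize when the target system is a DFA: run the word through the automaton and report acceptance. A minimal sketch with a hypothetical target of my own (not one of the chapter's examples):

```python
def dfa_accepts(dfa, word):
    """Answer a membership query: simulate the target DFA on the word."""
    state = dfa["start"]
    for symbol in word:
        state = dfa["delta"][(state, symbol)]
    return state in dfa["accept"]

# Hidden target: words over {a, b} with an even number of a's.
target = {
    "start": 0,
    "accept": {0},
    "delta": {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1},
}
assert dfa_accepts(target, "abab")    # two a's: accepted
assert not dfa_accepts(target, "ab")  # one a: rejected
```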
Source: arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)

arXiv: Computational Complexity: Chernoff Bounds and Reverse Hypercontractivity on HDX (2024-04-18)
http://arxiv.org/abs/2404.10961v1
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Yotam+Dikstein">Yotam Dikstein</a>, <a href="https://dblp.uni-trier.de/search?q=Max+Hopkins">Max Hopkins</a></p>We prove optimal concentration of measure for lifted functions on high
dimensional expanders (HDX). Let $X$ be a $k$-dimensional HDX. We show for any
$i\leq k$ and $f:X(i)\to [0,1]$: \[\Pr_{s\in X(k)}\left[\left|\underset{t\subseteq s}{\mathbb{E}}[f(t)]-\mu\right|\geq\varepsilon\right]\leq \exp\left(-\varepsilon^2\frac{k}{i}\right).\] Using this fact, we prove that
high dimensional expanders are reverse hypercontractive, a powerful functional
inequality from discrete analysis implying that for any sets $A,B \subset
X(k)$, the probability a $\rho$-correlated pair passes between them is at least
\[\Pr_{s,s' \sim T_\rho}[s \in A, s' \in B] \geq \Pr[A]^{O(1)} \Pr[B]^{O(1)}.\]
Our results hold under weak spectral assumptions on $X$. Namely we prove
exponential concentration of measure for any complex below the `Trickling-Down
Threshold' (beyond which concentration may be arbitrarily poor), and optimal
concentration for $\sqrt{k}$-skeletons of such complexes. We also show optimal
bounds for the top dimension of stronger HDX among other settings. We leverage
our inequalities to prove several new agreement testing theorems on high
dimensional expanders, including a new 99%-regime test for subsets, and a
variant of the `Z-test' achieving inverse exponential soundness under the
stronger assumption of $\ell_\infty$-expansion. The latter gives rise to the
first optimal testers beyond the complete complex and products, a stepping
stone toward the use of HDX in strong soundness PCPs. We also give applications
within expansion, analysis, combinatorics, and coding theory, including a proof
that two-sided HDX have optimal geometric overlap (giving the first explicit
bounded-degree construction), near-optimal double samplers, new
super-exponential degree lower bounds for certain HDX, distance-amplified
list-decodable and locally testable codes, a Frankl-R\"odl Theorem and more.</body></html>
Source: arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)

arXiv: Computational Complexity: On approximability of the Permanent of PSD matrices (2024-04-18)
http://arxiv.org/abs/2404.10959v1
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Farzam+Ebrahimnejad">Farzam Ebrahimnejad</a>, <a href="https://dblp.uni-trier.de/search?q=Ansh+Nagda">Ansh Nagda</a>, <a href="https://dblp.uni-trier.de/search?q=Shayan+Oveis+Gharan">Shayan Oveis Gharan</a></p>We study the complexity of approximating the permanent of a positive
semidefinite matrix $A\in \mathbb{C}^{n\times n}$.
1. We design a new approximation algorithm for $\mathrm{per}(A)$ with
approximation ratio $e^{(0.9999 + \gamma)n}$, exponentially improving upon the
current best bound of $e^{(1+\gamma-o(1))n}$ [AGOS17,YP22]. Here, $\gamma
\approx 0.577$ is Euler's constant.
2. We prove that it is NP-hard to approximate $\mathrm{per}(A)$ within a
factor $e^{(\gamma-\epsilon)n}$ for any $\epsilon>0$. This is the first
exponential hardness of approximation for this problem. Along the way, we prove
optimal hardness of approximation results for the $\|\cdot\|_{2\to q}$ ``norm''
problem of a matrix for all $-1 < q < 2$.</body></html>
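For context, the permanent is the determinant's signless analogue, and the defining formula is feasible only for tiny matrices; the paper's subject is how well it can be approximated for PSD inputs. A naive sketch of the definition (not an approximation algorithm):

```python
from itertools import permutations
from math import prod

def permanent(A):
    """Naive O(n! * n) permanent: sum over permutations, no signs."""
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# per([[1, 2], [3, 4]]) = 1*4 + 2*3 = 10, versus determinant 1*4 - 2*3 = -2.
assert permanent([[1, 2], [3, 4]]) == 10
```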
Source: arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)

arXiv: Computational Complexity: Constructing $\mathrm{NP}^{\mathord{\#}\mathrm P}$-complete problems and ${\mathord{\#}\mathrm P}$-hardness of circuit extraction in phase-free ZH (2024-04-18)
http://arxiv.org/abs/2404.10913v1
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Piotr+Mitosek">Piotr Mitosek</a></p>The ZH calculus is a graphical language for reasoning about quantum computation.
The phase-free variant offers a simple set of generators that guarantee
universality. ZH calculus is effective in MBQC and analysis of quantum circuits
constructed with the universal gate set Toffoli+H. While circuits naturally
translate to ZH diagrams, finding an ancilla-free circuit equivalent to a given
diagram is hard. Here, we show that circuit extraction for phase-free ZH
calculus is ${\mathord{\#}\mathrm P}$-hard, extending the existing result for
ZX calculus. Another problem believed to be hard is comparing whether two
diagrams represent the same process. We show that two closely related problems
are $\mathrm{NP}^{\mathord{\#}\mathrm P}$-complete. The first problem is: given
two processes represented as diagrams, determine the existence of a
computational basis state on which they equalize. The second problem is
checking whether the matrix representation of a given diagram contains an entry
equal to a given number. Our proof adapts the proof of the Cook-Levin theorem
to a reduction from a non-deterministic Turing machine with access to a
${\mathord{\#}\mathrm P}$ oracle.</body></html>
Source: arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)

arXiv: Computational Complexity: Constant-Depth Arithmetic Circuits for Linear Algebra Problems (2024-04-18)
http://arxiv.org/abs/2404.10839v1
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Robert+Andrews">Robert Andrews</a>, <a href="https://dblp.uni-trier.de/search?q=Avi+Wigderson">Avi Wigderson</a></p>We design polynomial size, constant depth (namely, $\mathsf{AC}^0$)
arithmetic formulae for the greatest common divisor (GCD) of two polynomials,
as well as the related problems of the discriminant, resultant, B\'ezout
coefficients, squarefree decomposition, and the inversion of structured
matrices like Sylvester and B\'ezout matrices. Our GCD algorithm extends to any
number of polynomials. Previously, the best known arithmetic formulae for these
problems required super-polynomial size, regardless of depth.
These results are based on new algorithmic techniques to compute various
symmetric functions in the roots of polynomials, as well as manipulate the
multiplicities of these roots, without having access to them. These techniques
allow $\mathsf{AC}^0$ computation of a large class of linear and polynomial
algebra problems, which include the above as special cases.
We extend these techniques to problems whose inputs are multivariate
polynomials, which are represented by $\mathsf{AC}^0$ arithmetic circuits. Here
too we solve problems such as computing the GCD and squarefree decomposition in
$\mathsf{AC}^0$.</body></html>
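For contrast with the constant-depth formulae above, the textbook route to the polynomial GCD is the Euclidean algorithm: polynomial size, but inherently sequential, with depth growing with the degree. A minimal sketch over the rationals (my illustration, not the paper's construction), representing polynomials as coefficient lists, highest degree first:

```python
from fractions import Fraction

def _strip(p):
    """Drop leading zero coefficients."""
    i = 0
    while i < len(p) - 1 and p[i] == 0:
        i += 1
    return p[i:]

def poly_rem(a, b):
    """Remainder of a modulo b over the rationals."""
    a, b = _strip(a), _strip(b)
    while len(a) >= len(b) and a != [0]:
        coef = Fraction(a[0]) / Fraction(b[0])
        # Subtract coef * x^(deg a - deg b) * b, cancelling a's leading term.
        a = _strip([a[i] - coef * (b[i] if i < len(b) else 0)
                    for i in range(len(a))])
    return a

def poly_gcd(a, b):
    """Monic GCD by the Euclidean algorithm."""
    a, b = _strip(a), _strip(b)
    while b != [0]:
        a, b = b, poly_rem(a, b)
    return [Fraction(c) / Fraction(a[0]) for c in a]

# gcd(x^2 - 3x + 2, x^2 - 4x + 3) = x - 1 (the shared root is x = 1).
assert poly_gcd([1, -3, 2], [1, -4, 3]) == [1, -1]
```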
Source: arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)

arXiv: Data Structures and Algorithms: Private federated discovery of out-of-vocabulary words for Gboard (2024-04-18)
http://arxiv.org/abs/2404.11607v1
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Ziteng+Sun">Ziteng Sun</a>, <a href="https://dblp.uni-trier.de/search?q=Peter+Kairouz">Peter Kairouz</a>, <a href="https://dblp.uni-trier.de/search?q=Haicheng+Sun">Haicheng Sun</a>, <a href="https://dblp.uni-trier.de/search?q=Adria+Gascon">Adria Gascon</a>, <a href="https://dblp.uni-trier.de/search?q=Ananda+Theertha+Suresh">Ananda Theertha Suresh</a></p>The vocabulary of language models in Gboard, Google's keyboard application,
plays a crucial role for improving user experience. One way to improve the
vocabulary is to discover frequently typed out-of-vocabulary (OOV) words on
user devices. This task requires strong privacy protection due to the sensitive
nature of user input data. In this report, we present a private OOV discovery
algorithm for Gboard, which builds on recent advances in private federated
analytics. The system offers local differential privacy (LDP) guarantees for
user-contributed words. With anonymous aggregation, the final released words
satisfy central differential privacy guarantees with $\epsilon = 0.315, \delta
= 10^{-10}$ for OOV discovery in en-US (English in United States).</body></html>
Source: arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)

arXiv: Data Structures and Algorithms: The EDGE Language: Extended General Einsums for Graph Algorithms (2024-04-18)
http://arxiv.org/abs/2404.11591v1
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Toluwanimi+O.+Odemuyiwa">Toluwanimi O. Odemuyiwa</a>, <a href="https://dblp.uni-trier.de/search?q=Joel+S.+Emer">Joel S. Emer</a>, <a href="https://dblp.uni-trier.de/search?q=John+D.+Owens">John D. Owens</a></p>In this work, we propose a unified abstraction for graph algorithms: the
Extended General Einsums language, or EDGE. The EDGE language expresses graph
algorithms in the language of tensor algebra, providing a rigorous, succinct,
and expressive mathematical framework. EDGE leverages two ideas: (1) the
well-known foundations provided by the graph-matrix duality, where a graph is
simply a 2D tensor, and (2) the power and expressivity of Einsum notation in
the tensor algebra world. In this work, we describe our design goals for EDGE
and walk through the extensions we add to Einsums to support more complex
operations common in graph algorithms. Additionally, we provide a few examples
of how to express graph algorithms in our proposed notation. We hope that a
single, mathematical notation for graph algorithms will (1) allow researchers
to more easily compare different algorithms and different implementations of a
graph algorithm; (2) enable developers to factor complexity by separating the
concerns of what to compute (described with the extended Einsum notation) from
the lower level details of how to compute; and (3) enable the discovery of
different algorithmic variants of a problem through algebraic manipulations and
transformations on a given EDGE expression.</body></html>
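The graph-matrix duality that EDGE builds on is easy to see with ordinary Einsums in NumPy (plain einsum, not the extended EDGE notation): vertex degrees are a row contraction of the adjacency tensor, and triangles are closed 3-walks counted by a threefold contraction.

```python
import numpy as np

# A triangle on vertices 0, 1, 2, plus a pendant vertex 3 attached to 0.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]])

degrees = np.einsum('ij->i', A)                    # row sums
triangles = np.einsum('ij,jk,ki->', A, A, A) // 6  # closed 3-walks / 6

assert list(degrees) == [3, 2, 2, 1]
assert triangles == 1
```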
Source: arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)

arXiv: Data Structures and Algorithms: Testing Intersectingness of Uniform Families (2024-04-18)
http://arxiv.org/abs/2404.11504v1
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Ishay+Haviv">Ishay Haviv</a>, <a href="https://dblp.uni-trier.de/search?q=Michal+Parnas">Michal Parnas</a></p>A set family ${\cal F}$ is called intersecting if every two members of ${\cal
F}$ intersect, and it is called uniform if all members of ${\cal F}$ share a
common size. A uniform family ${\cal F} \subseteq \binom{[n]}{k}$ of
$k$-subsets of $[n]$ is $\varepsilon$-far from intersecting if one has to
remove more than $\varepsilon \cdot \binom{n}{k}$ of the sets of ${\cal F}$ to
make it intersecting. We study the property testing problem that given query
access to a uniform family ${\cal F} \subseteq \binom{[n]}{k}$, asks to
distinguish between the case that ${\cal F}$ is intersecting and the case that
it is $\varepsilon$-far from intersecting. We prove that for every fixed
integer $r$, the problem admits a non-adaptive two-sided error tester with
query complexity $O(\frac{\ln n}{\varepsilon})$ for $\varepsilon \geq \Omega(
(\frac{k}{n})^r)$ and a non-adaptive one-sided error tester with query
complexity $O(\frac{\ln k}{\varepsilon})$ for $\varepsilon \geq \Omega(
(\frac{k^2}{n})^r)$. The query complexities are optimal up to the logarithmic
terms. For $\varepsilon \geq \Omega( (\frac{k^2}{n})^2)$, we further provide a
non-adaptive one-sided error tester with optimal query complexity of
$O(\frac{1}{\varepsilon})$. Our findings show that the query complexity of the
problem differs substantially from that of testing intersectingness of
non-uniform families, studied recently by Chen, De, Li, Nadimpalli, and
Servedio (ITCS, 2024).</body></html>
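The definitions in play are easy to make concrete. A minimal checker for intersectingness (my illustration, not the paper's tester), with the classic "star" family, all $k$-subsets through a fixed element, as an example:

```python
from itertools import combinations

def is_intersecting(family):
    """True iff every two members of the family share an element."""
    return all(set(a) & set(b) for a, b in combinations(family, 2))

# The star: all 2-subsets of {0, 1, 2, 3} that contain 0.
star = [s for s in combinations(range(4), 2) if 0 in s]
assert is_intersecting(star)
assert not is_intersecting([(0, 1), (2, 3)])  # a disjoint pair
```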
Source: arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)

arXiv: Data Structures and Algorithms: Sinking an Algorithmic Isthmus: (1 + ε)-Approximate Min-Sum Subset Convolution (2024-04-18)
http://arxiv.org/abs/2404.11364v1
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Mihail+Stoian">Mihail Stoian</a></p>Given functions $f$ and $g$ defined on the subset lattice of order $n$, their
min-sum subset convolution, defined for all $S \subseteq [n]$ as \[
(f \star g)(S) = \min_{T \subseteq S}\:\big(f(T) + g(S \setminus T)\big), \]
is a fundamental tool in parameterized algorithms. However, since its na\"ive
$O(3^n)$-time evaluation is also the fastest known, it has been used only in
settings where the input functions have a bounded integer range $\{-M, \ldots,
M\}$. In this case, the running time becomes $\tilde O(2^n M)$ by resorting to
fast subset convolution in the sum-product ring. This is disadvantageous due to
the dependence on $M$, limiting its practicality.
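The naive evaluation just mentioned is a few lines with bitmask submask enumeration; each pair $T \subseteq S$ is visited once, which is where the $O(3^n)$ bound comes from (a sketch of the exact baseline, not the paper's approximation scheme):

```python
def min_sum_subset_conv(f, g, n):
    """Naive O(3^n) min-sum subset convolution; f, g are lists of
    length 2**n indexed by subset bitmasks of [n]."""
    h = [0] * (1 << n)
    for S in range(1 << n):
        best = f[0] + g[S]      # T = empty set
        T = S
        while T:                # enumerate the non-empty submasks of S
            best = min(best, f[T] + g[S ^ T])
            T = (T - 1) & S
        h[S] = best
    return h

assert min_sum_subset_conv([0, 1, 2, 3], [0, 10, 20, 30], 2) == [0, 1, 2, 3]
```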
In this light, we study whether the problem admits a $(1 +
\varepsilon)$-approximation scheme in time independent of $M$. Our main result
is the first $\tilde O(2^\frac{3n}{2} / \sqrt{\varepsilon})$-time algorithm for
the $(1 + \varepsilon)$-approximate min-sum subset convolution. To show its
applicability, we present $(1 + \varepsilon)$-approximation schemes in the same
exponential time bound for several NP-hard problems using this convolution,
such as the minimum-cost $k$-coloring problem -- in time $\tilde
O(2^\frac{3n}{2} / \sqrt{\varepsilon})$, and the prize-collecting Steiner tree
problem -- in time $\tilde O(2^\frac{3s^+}{2} / \sqrt{\varepsilon})$, where $n$
is the number of vertices and $s^+$ is the number of proper potential
terminals. We also discuss two other applications in computational biology.
Our algorithms lie at the intersection of two lines of research that have
been considered separately: $\textit{sequence}$ and $\textit{subset}$
convolutions in semi-rings. In particular, we extend the recent framework of
Bringmann, K\"unnemann, and W\k{e}grzycki [STOC 2019] to the context of subset
convolutions.</body></html>
Source: arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)

arXiv: Data Structures and Algorithms: On Learning Parities with Dependent Noise (2024-04-18)
http://arxiv.org/abs/2404.11325v1
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Noah+Golowich">Noah Golowich</a>, <a href="https://dblp.uni-trier.de/search?q=Ankur+Moitra">Ankur Moitra</a>, <a href="https://dblp.uni-trier.de/search?q=Dhruv+Rohatgi">Dhruv Rohatgi</a></p>In this expository note we show that the learning parities with noise (LPN)
assumption is robust to weak dependencies in the noise distribution of small
batches of samples. This provides a partial converse to the linearization
technique of [AG11]. The material in this note is drawn from a recent work by
the authors [GMR24], where the robustness guarantee was a key component in a
cryptographic separation between reinforcement learning and supervised
learning.</body></html>
Source: arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)

arXiv: Data Structures and Algorithms: Distribution-Free Testing of Decision Lists with a Sublinear Number of Queries (2024-04-18)
http://arxiv.org/abs/2404.11103v1
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Xi+Chen">Xi Chen</a>, <a href="https://dblp.uni-trier.de/search?q=Yumou+Fei">Yumou Fei</a>, <a href="https://dblp.uni-trier.de/search?q=Shyamal+Patel">Shyamal Patel</a></p>We give a distribution-free testing algorithm for decision lists with
$\tilde{O}(n^{11/12}/\varepsilon^3)$ queries. This is the first sublinear
algorithm for this problem, which shows that, unlike halfspaces, testing is
strictly easier than learning for decision lists. Complementing the algorithm,
we show that any distribution-free tester for decision lists must make
$\tilde{\Omega}(\sqrt{n})$ queries, or draw $\tilde{\Omega}(n)$ samples when
the algorithm is sample-based.</body></html>
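For concreteness, a decision list (in the standard sense: an ordered sequence of "if literal then output bit" rules with a default) is evaluated by firing the first rule whose literal matches; a minimal sketch with a hypothetical list of my own:

```python
def eval_decision_list(rules, default, x):
    """rules: (index, value, output) triples tried in order."""
    for i, v, out in rules:
        if x[i] == v:
            return out
    return default

# f(x) = 1 if x[1] == 1, else 0 if x[3] == 0, else the default 1.
rules = [(1, 1, 1), (3, 0, 0)]
assert eval_decision_list(rules, 1, [0, 1, 0, 0]) == 1
assert eval_decision_list(rules, 1, [0, 0, 0, 0]) == 0
assert eval_decision_list(rules, 1, [0, 0, 0, 1]) == 1
```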
Source: arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)

arXiv: Data Structures and Algorithms: Online Algorithms with Limited Data Retention (2024-04-18)
http://arxiv.org/abs/2404.10997v1
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Nicole+Immorlica">Nicole Immorlica</a>, <a href="https://dblp.uni-trier.de/search?q=Brendan+Lucier">Brendan Lucier</a>, <a href="https://dblp.uni-trier.de/search?q=Markus+Mobius">Markus Mobius</a>, <a href="https://dblp.uni-trier.de/search?q=James+Siderius">James Siderius</a></p>We introduce a model of online algorithms subject to strict constraints on
data retention. An online learning algorithm encounters a stream of data
points, one per round, generated by some stationary process. Crucially, each
data point can request that it be removed from memory $m$ rounds after it
arrives. To model the impact of removal, we do not allow the algorithm to store
any information or calculations between rounds other than a subset of the data
points (subject to the retention constraints). At the conclusion of the stream,
the algorithm answers a statistical query about the full dataset. We ask: what
level of performance can be guaranteed as a function of $m$?
We illustrate this framework for multidimensional mean estimation and linear
regression problems. We show it is possible to obtain an exponential
improvement over a baseline algorithm that retains all data as long as
possible. Specifically, we show that $m = \textsc{Poly}(d, \log(1/\epsilon))$
retention suffices to achieve mean squared error $\epsilon$ after observing
$O(1/\epsilon)$ $d$-dimensional data points. This matches the error bound of
the optimal, yet infeasible, algorithm that retains all data forever. We also
show a nearly matching lower bound on the retention required to guarantee error
$\epsilon$. One implication of our results is that data retention laws are
insufficient to guarantee the right to be forgotten even in a non-adversarial
world in which firms merely strive to (approximately) optimize the performance
of their algorithms.
Our approach makes use of recent developments in the multidimensional random
subset sum problem to simulate the progression of stochastic gradient descent
under a model of adversarial noise, which may be of independent interest.</body></html>
Source: arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)

arXiv: Data Structures and Algorithms: Drawing Competitive Districts in Redistricting (2024-04-18)
http://arxiv.org/abs/2404.10964v1
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Gabriel+Chuang">Gabriel Chuang</a>, <a href="https://dblp.uni-trier.de/search?q=Oussama+Hanguir">Oussama Hanguir</a>, <a href="https://dblp.uni-trier.de/search?q=Clifford+Stein">Clifford Stein</a></p>In the process of redistricting, one important metric is the number of
competitive districts, that is, districts where both parties have a reasonable
chance of winning a majority of votes. Competitive districts are important for
achieving proportionality, responsiveness, and other desirable qualities; some
states even directly list competitiveness in their legally-codified districting
requirements. In this work, we discuss the problem of drawing plans with at
least a fixed number of competitive districts. In addition to the standard,
``vote-band'' measure of competitiveness (i.e., how close was the last
election?), we propose a measure that explicitly considers ``swing voters'':
the segment of the population that may choose to vote either way, or not vote
at all, in a given election. We present two main, contrasting results. First,
from a computational complexity perspective, we show that the task of drawing
plans with competitive districts is NP-hard, even on very natural instances
where the districting task itself is easy (e.g., small rectangular grids of
population-balanced cells). Second, however, we show that a simple
hill-climbing procedure can in practice find districtings on real states in
which all the districts are competitive. We present the results of the latter
on the precinct-level graphs of the U.S. states of North Carolina and Arizona,
and discuss trade-offs between competitiveness and other desirable qualities.</body></html>
Source: arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)

arXiv: Data Structures and Algorithms: The Traveling Tournament Problem: Improved Algorithms Based on Cycle Packing (2024-04-18)
http://arxiv.org/abs/2404.10955v1
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Jingyang+Zhao">Jingyang Zhao</a>, <a href="https://dblp.uni-trier.de/search?q=Mingyu+Xiao">Mingyu Xiao</a>, <a href="https://dblp.uni-trier.de/search?q=Chao+Xu">Chao Xu</a></p>The Traveling Tournament Problem (TTP) is a well-known benchmark problem in
the field of tournament timetabling, which asks us to design a double
round-robin schedule such that each pair of teams plays one game in each
other's home venue, minimizing the total distance traveled by all $n$ teams
($n$ is even). TTP-$k$ is the problem with one more constraint that each team
can have at most $k$ consecutive home games or away games. In this paper, we
investigate schedules for TTP-$k$ and analyze the approximation ratio of the
solutions. Most previous schedules were constructed based on a Hamiltonian
cycle of the graph. We will propose a novel construction based on a $k$-cycle
packing. Then, combining our $k$-cycle packing schedule with the Hamiltonian
cycle schedule, we obtain improved approximation ratios for TTP-$k$ with deep
analysis. The case $k=3$, TTP-3, is one of the most investigated.
We improve the approximation ratio of TTP-3 from $(1.667+\varepsilon)$ to
$(1.598+\varepsilon)$, for any $\varepsilon>0$. For TTP-$4$, we improve the
approximation ratio from $(1.750+\varepsilon)$ to $(1.700+\varepsilon)$. By a
refined analysis of the Hamiltonian cycle construction, we also improve the
approximation ratio of TTP-$k$ from $(\frac{5k-7}{2k}+\varepsilon)$ to
$(\frac{5k^2-4k+3}{2k(k+1)}+\varepsilon)$ for any constant $k\geq 5$. Our
methods can be extended to solve a variant called LDTTP-$k$ (TTP-$k$ where all
teams are allocated on a straight line). We show that the $k$-cycle packing
construction can achieve an approximation ratio of
$(\frac{3k-3}{2k-1}+\varepsilon)$, which improves the approximation ratio of
LDTTP-3 from $4/3$ to $(6/5+\varepsilon)$.</body></html>
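As a quick arithmetic check of the stated general-$k$ improvement (exact fractions, with the $\varepsilon$ terms omitted): the gap between the two bounds simplifies to $(k-5)/(k(k+1))$, so the new bound is never worse for $k \geq 5$ and strictly better for $k \geq 6$.

```python
from fractions import Fraction

def old_ratio(k):
    """Previous bound (5k - 7) / (2k) for TTP-k, epsilon omitted."""
    return Fraction(5 * k - 7, 2 * k)

def new_ratio(k):
    """Improved bound (5k^2 - 4k + 3) / (2k(k + 1)), epsilon omitted."""
    return Fraction(5 * k * k - 4 * k + 3, 2 * k * (k + 1))

for k in range(5, 50):
    # old - new simplifies to (k - 5) / (k (k + 1))
    assert old_ratio(k) - new_ratio(k) == Fraction(k - 5, k * (k + 1))
    assert new_ratio(k) <= old_ratio(k)
```

(The TTP-3 and TTP-4 ratios quoted in the abstract come from separate, sharper analyses, not from these formulas.)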
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentCCI: jobs: PhD Student at Inria Lille (apply by April 29, 2024)http://cstheory-jobs.org/2024/04/17/phd-student-at-inria-lille-apply-by-april-29-2024/2024-04-17T08:13:15+00:00
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p>Inria Lille and the University of Lille in France have a funding opportunity for a PhD student under the supervision of Antoine Amarilli, Mikaël Monet, and Sylvain Salvati. The PhD will start in fall 2024 for a duration of 3 years. The PhD is about width measures for tractable query evaluation on probabilistic databases; it relates to counting complexity, graph theory, formal languages, and logic.</p>
<p>Website: <a href="https://a3nm.net/work/research/offers/thesis_proposal_query.pdf">https://a3nm.net/work/research/offers/thesis_proposal_query.pdf</a><br>
Email: a3nm.jobs@a3nm.net</p>
<p class="authors">By shacharlovett</p>
</body></html>
CCI: jobshttps://cstheory-jobs.orgTCS+ Seminar Series: Guest post: TCS for All (previously TCS Women) Spotlight Workshop at STOC 2024/Theory Fest: Travel grants and call for speaker nominations.http://tcsplus.wordpress.com/?p=7302024-04-17T00:51:59+00:00
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p><em>Guest post by the organizers of <a href="https://sigact.org/tcsforall/">TCS for All </a>(previously TCS Women).</em></p>
<p>You are cordially invited to our TCS for All Spotlight Workshop! The workshop will be held on June 24, 2024, at 12:30 pm – 2 pm, and 4:15 pm – 5:45 pm in Vancouver, Canada as part of the <a href="https://url.au.m.mimecastprotect.com/s/x0lOC1WLPxcnn8Y9LHLizlj?domain=acm-stoc.org/" target="_blank" rel="noreferrer noopener">56th Symposium on Theory of Computing (STOC)</a> and TheoryFest! The workshop is open to all.</p>
<p>More information about the workshop will be made available here: <a href="https://url.au.m.mimecastprotect.com/s/lMqGC2xMQziKK7w90c1GfzL?domain=sigact.org/" target="_blank" rel="noreferrer noopener">https://sigact.org/tcsforall/</a>. In particular, we would like to highlight the TCS for All Travel Scholarships (deadline April 28th) and a call for nominations for Rising Stars talks at the workshop (deadline April 28th). More information on both is below.</p>
<p class="has-text-align-center">Hope to see you in Vancouver!</p>
<p><strong><em>TCS for All Travel Scholarship:</em></strong></p>
<p>TCS for All Travel Scholarships are intended for researchers at the beginning of their career. This scholarship is being made available for minorities in TCS, and anyone who identifies as such is welcome to apply; this scholarship is open to both US and international students. Preference will be given to students at the beginning of their studies. If we have sufficient funding, we will give awards to more senior students and possibly even postdocs.</p>
<p>To apply, you will need to fill out the following form by <strong>April 28th, 2024 </strong>(11:59 pm PDT) in which you provide basic information about yourself, an estimate of your expenses, and a brief statement:</p>
<p><a href="https://url.au.m.mimecastprotect.com/s/gdO8C3QNPBi77ZQEWT2isXq?domain=docs.google.com" target="_blank" rel="noreferrer noopener">Apply for a travel grant here.</a></p>
<p>In addition, you will need to have your advisor (or department head or other faculty mentor if you do not yet have an advisor) send a letter of support to <a href="mailto:tcswomen@gmail.com">tcswomen@gmail.com</a> by April 28th. Your advisor’s letter should also describe the availability of other travel funds. Note for advisors: Specifics about alternative funding are very helpful. Statements like “funding is tight” are not very helpful. This letter should be sent with the subject line “support letter for [your name]”. This is very important. Your application is not complete without this letter.</p>
<p>Late applications (after April 28th) will not be accepted. You will be notified about your status by May 5th, which is prior to the STOC early registration deadline and hotel cut-off deadline.</p>
<p>Notes: Receipts will be required for all travel awards, and reimbursements will be made after the conference. Food or visa expenses will not be reimbursed.</p>
<p><strong><em>Nominations for Rising Star talks:</em></strong></p>
<p>We invite nominations for speakers in our Rising Star talks at the TCS for All Spotlight Workshop at STOC 2024. To be eligible, your nominee has to be a senior PhD student with expected graduation no later than August 2024, or a postdoc in theoretical computer science (all topics represented at STOC are welcome), an underrepresented minority, and not a speaker at a previous TCS Women Spotlight Workshop. Preference will be given to speakers who are currently on the job market for postdoctoral/faculty positions, or who expect to be on the job market in Fall 2024.</p>
<p>You can make your nomination by filling this form by April 28th: <a href="https://url.au.m.mimecastprotect.com/s/hnYoC4QOPEiYYr1vRhWeNpQ?domain=forms.gle" target="_blank" rel="noreferrer noopener">https://forms.gle/WAZsXRm2wuxhdukK7</a></p>
<p class="authors">By plustcs</p>
</body></html>
TCS+ Seminar Serieshttps://tcsplus.wordpress.comarXiv: Computational Complexity: Tetris with Few Piece Typeshttp://arxiv.org/abs/2404.10712v12024-04-17T00:00:00+00:00
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=MIT+Hardness+Group">MIT Hardness Group</a>, <a href="https://dblp.uni-trier.de/search?q=Erik+D.+Demaine">Erik D. Demaine</a>, <a href="https://dblp.uni-trier.de/search?q=Holden+Hall">Holden Hall</a>, <a href="https://dblp.uni-trier.de/search?q=Jeffery+Li">Jeffery Li</a></p>We prove NP-hardness and #P-hardness of Tetris clearing (clearing an initial
board using a given sequence of pieces) with the Super Rotation System (SRS),
even when the pieces are limited to any two of the seven Tetris piece types.
This result is the first advance on a question posed twenty years ago: which
piece sets are easy vs. hard? All previous Tetris NP-hardness proofs used five
of the seven piece types. We also prove ASP-completeness of Tetris clearing,
using three piece types, as well as versions of 3-Partition and Numerical
3-Dimensional Matching where all input integers are distinct. Finally, we prove
NP-hardness of Tetris survival and clearing under the "hard drops only" and
"20G" modes, using two piece types, improving on a previous "hard drops only"
result that used five piece types.</body></html>
arXiv: Computational Complexityhttps://arxiv.org/list/cs.CC/recentarXiv: Computational Complexity: The Simultaneous Interval Number: A New Width Parameter that Measures
the Similarity to Interval Graphshttp://arxiv.org/abs/2404.10670v12024-04-17T00:00:00+00:00
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Jesse+Beisegel">Jesse Beisegel</a>, <a href="https://dblp.uni-trier.de/search?q=Nina+Chiarelli">Nina Chiarelli</a>, <a href="https://dblp.uni-trier.de/search?q=Ekkehard+K%C3%B6hler">Ekkehard Köhler</a>, <a href="https://dblp.uni-trier.de/search?q=Martin+Milani%C4%8D">Martin Milanič</a>, <a href="https://dblp.uni-trier.de/search?q=Peter+Mur%C5%A1i%C4%8D">Peter Muršič</a>, <a href="https://dblp.uni-trier.de/search?q=Robert+Scheffler">Robert Scheffler</a></p>We propose a novel way of generalizing the class of interval graphs, via a
graph width parameter called the simultaneous interval number. This parameter
is related to the simultaneous representation problem for interval graphs and
defined as the smallest number $d$ of labels such that the graph admits a
$d$-simultaneous interval representation, that is, an assignment of intervals
and label sets to the vertices such that two vertices are adjacent if and only
if the corresponding intervals, as well as their label sets, intersect. We show
that this parameter is $\mathsf{NP}$-hard to compute and give several bounds
for the parameter, showing in particular that it is sandwiched between
pathwidth and linear mim-width. For classes of graphs with bounded parameter
values, assuming that the graph is equipped with a simultaneous interval
representation with a constant number of labels, we give $\mathsf{FPT}$
algorithms for the clique, independent set, and dominating set problems, and
hardness results for the independent dominating set and coloring problems. The
$\mathsf{FPT}$ results for independent set and dominating set are for the
simultaneous interval number plus solution size. In contrast, both problems are
known to be $\mathsf{W}[1]$-hard for linear mim-width plus solution size.</body></html>
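The defining condition is easy to check mechanically. The sketch below illustrates the definition only (it is not one of the paper's algorithms): it verifies that an assignment of intervals and label sets represents a given graph, i.e. that two vertices are adjacent iff both their intervals and their label sets intersect.

```python
from itertools import combinations

def intervals_intersect(a, b):
    """Closed intervals (lo, hi) intersect iff neither is wholly to one side."""
    return a[0] <= b[1] and b[0] <= a[1]

def is_simultaneous_interval_rep(vertices, edges, interval, labels):
    """Check the definition: u ~ v iff the intervals AND the label sets meet."""
    E = {frozenset(e) for e in edges}
    for u, v in combinations(vertices, 2):
        represented = (intervals_intersect(interval[u], interval[v])
                       and bool(labels[u] & labels[v]))
        if (frozenset((u, v)) in E) != represented:
            return False
    return True

# The 4-cycle a-b-c-d is not an interval graph, but letting every interval
# overlap and encoding adjacency in the label sets gives a valid representation.
V = ["a", "b", "c", "d"]
E4 = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
iv = {v: (0, 10) for v in V}
lb = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4}, "d": {4, 1}}
assert is_simultaneous_interval_rep(V, E4, iv, lb)
assert not is_simultaneous_interval_rep(V, E4 + [("a", "c")], iv, lb)
```

(The example makes no claim about the minimum number of labels; it only exercises the checker.)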
arXiv: Computational Complexityhttps://arxiv.org/list/cs.CC/recentarXiv: Computational Complexity: PSPACE-Hard 2D Super Mario Games: Thirteen Doorshttp://arxiv.org/abs/2404.10380v12024-04-17T00:00:00+00:00
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=MIT+Hardness+Group">MIT Hardness Group</a>, <a href="https://dblp.uni-trier.de/search?q=Hayashi+Ani">Hayashi Ani</a>, <a href="https://dblp.uni-trier.de/search?q=Erik+D.+Demaine">Erik D. Demaine</a>, <a href="https://dblp.uni-trier.de/search?q=Holden+Hall">Holden Hall</a>, <a href="https://dblp.uni-trier.de/search?q=Matias+Korman">Matias Korman</a></p>We prove PSPACE-hardness for fifteen games in the Super Mario Bros. 2D
platforming video game series. Previously, only the original Super Mario Bros.
was known to be PSPACE-hard (FUN 2016), though several of the games we study
were known to be NP-hard (FUN 2014). Our reductions build door gadgets with
open, close, and traverse traversals, in each case using mechanics unique to
the game. While some of our door constructions are similar to those from FUN
2016, those for Super Mario Bros. 2, Super Mario Land 2, Super Mario World 2,
and the New Super Mario Bros. series are quite different; notably, the Super
Mario Bros. 2 door is extremely difficult. Doors remain elusive for just two 2D
Mario games (Super Mario Land and Super Mario Run); we prove that these games
are at least NP-hard.</body></html>
arXiv: Computational Complexityhttps://arxiv.org/list/cs.CC/recentarXiv: Data Structures and Algorithms: A Fast 3-Approximation for the Capacitated Tree Cover Problem with Edge
Loadshttp://arxiv.org/abs/2404.10638v12024-04-17T00:00:00+00:00
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Benjamin+Rockel-Wolff">Benjamin Rockel-Wolff</a></p>The capacitated tree cover problem with edge loads is a variant of the tree
cover problem, where we are given facility opening costs, edge costs and loads,
as well as vertex loads. We try to find a tree cover of minimum cost such that
the total edge and vertex load of each tree does not exceed a given bound. We
present an $\mathcal{O}(m\log n)$ time 3-approximation algorithm for this
problem.
This is achieved by starting with a certain LP formulation. We give a
combinatorial algorithm that solves the LP optimally in time $\mathcal{O}(m\log
n)$. Then, we show that a linear time rounding and splitting technique leads to
an integral solution that costs at most 3 times as much as the LP solution.
Finally, we prove that the integrality gap of the LP is $3$, which shows that
we cannot improve the rounding step in general.</body></html>
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Simple $k$-crashing Plan with a Good Approximation Ratiohttp://arxiv.org/abs/2404.10514v12024-04-17T00:00:00+00:00
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Ruixi+Luo">Ruixi Luo</a>, <a href="https://dblp.uni-trier.de/search?q=Kai+Jin">Kai Jin</a>, <a href="https://dblp.uni-trier.de/search?q=Zelin+Ye">Zelin Ye</a></p>In project management, a project is typically described as an
activity-on-edge network (AOE network), where each activity / job is
represented as an edge of some network $N$ (which is a DAG). Some jobs must be
finished before others can be started, as described by the topological structure
of $N$. At normal speed, job $j_i$ requires $b_i$ days to
finish after it is started. Given the network $N$ with the associated edge
lengths $b_1,\ldots,b_m$, the duration of the project is determined, which
equals the length of the critical path (namely, the longest path) of $N$.
To speed up the project (i.e. reduce the duration), the manager can crash a
few jobs (namely, reduce the length of the corresponding edges) by investing
extra resources into that job. However, the time for completing $j_i$ has a
lower bound due to technological limits -- it requires at least $a_i$ days to
be completed. Moreover, it is expensive to buy resources. Given $N$ and an
integer $k\geq 1$, the $k$-crashing problem asks for the minimum amount of
resources required to speed up the project by $k$ days. We show a simple and
efficient algorithm with an approximation ratio of
$1+\frac{1}{2}+\ldots+\frac{1}{k}$ (the $k$-th harmonic number) for this problem.
We also study a related problem called $k$-LIS, in which we are given a
sequence $\omega$ of numbers and we aim to find $k$ disjoint increasing
subsequences of $\omega$ with the largest total length. We show a
$(1-\frac{1}{e})$-approximation algorithm which is simple and efficient.</body></html>
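The project duration defined above is the length of the critical (longest) path of the DAG, computable in one dynamic-programming pass. A minimal sketch on a toy instance (nodes assumed already numbered in topological order for brevity; this illustrates the duration/crashing interplay, not the paper's algorithm):

```python
from collections import defaultdict

def project_duration(n, jobs):
    """Length of the critical (longest) path of an AOE network.
    `jobs` is a list of (u, v, days) edges of a DAG on nodes 0..n-1,
    assumed here to be numbered in topological order for brevity."""
    adj = defaultdict(list)
    for u, v, days in jobs:
        adj[u].append((v, days))
    dist = [0] * n
    for u in range(n):                     # topological order by assumption
        for v, days in adj[u]:
            dist[v] = max(dist[v], dist[u] + days)
    return max(dist)

jobs = [(0, 1, 5), (1, 3, 4), (0, 2, 2), (2, 3, 3)]
assert project_duration(4, jobs) == 9      # critical path 0 -> 1 -> 3
# Crashing job (0, 1) from 5 days down to a lower bound of 3 saves 2 days:
assert project_duration(4, [(0, 1, 3)] + jobs[1:]) == 7
```

The $k$-crashing problem asks how to buy such reductions, across possibly many jobs and paths, at minimum cost.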
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Subsequences With Generalised Gap Constraints: Upper and Lower
Complexity Boundshttp://arxiv.org/abs/2404.10497v12024-04-17T00:00:00+00:00
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Florin+Manea">Florin Manea</a>, <a href="https://dblp.uni-trier.de/search?q=Jonas+Richardsen">Jonas Richardsen</a>, <a href="https://dblp.uni-trier.de/search?q=Markus+L.+Schmid">Markus L. Schmid</a></p>For two strings u, v over some alphabet A, we investigate the problem of
embedding u into v as a subsequence in the presence of generalised gap
constraints. A generalised gap constraint is a triple (i, j, C_{i, j}), where 1
<= i < j <= |u| and C_{i, j} is a subset of A^*. Embedding u as a subsequence
into v such that (i, j, C_{i, j}) is satisfied means that if u[i] and u[j] are
mapped to v[k] and v[l], respectively, then the induced gap v[k + 1..l - 1]
must be a string from C_{i, j}. This generalises the setting recently
investigated in [Day et al., ISAAC 2022], where only gap constraints of the
form C_{i, i + 1} are considered, as well as the setting from [Kosche et al.,
RP 2022], where only gap constraints of the form C_{1, |u|} are considered.
We show that subsequence matching under generalised gap constraints is
NP-hard, and we complement this general lower bound with a thorough
(parameterised) complexity analysis. Moreover, we identify several efficiently
solvable subclasses that result from restricting the interval structure induced
by the generalised gap constraints.</body></html>
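A brute-force checker makes the definition concrete (0-indexed positions here, while the abstract uses 1-indexed; constraints are given as predicates on the induced gap). Its exponential running time is consistent with the NP-hardness result above.

```python
from itertools import combinations

def embeds_with_gaps(u, v, constraints):
    """Brute-force check whether u embeds into v as a subsequence such that
    every generalised gap constraint (i, j, C) holds, where C is a predicate
    on the induced gap v[k+1 .. l-1] between the matched positions of
    u[i] and u[j] (0-indexed)."""
    for positions in combinations(range(len(v)), len(u)):
        if any(u[i] != v[p] for i, p in enumerate(positions)):
            continue
        if all(C(v[positions[i] + 1 : positions[j]])
               for i, j, C in constraints):
            return True
    return False

# Example: embed "ab" requiring the gap between the matches to have even length.
even_gap = lambda gap: len(gap) % 2 == 0
assert embeds_with_gaps("ab", "axxb", [(0, 1, even_gap)])
assert not embeds_with_gaps("ab", "axb", [(0, 1, even_gap)])
```

Constraints of the form (i, i+1, C) recover the Day et al. setting, and a single (0, |u|-1, C) constraint recovers the Kosche et al. setting.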
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: How quickly can you pack short paths? Engineering a search-tree
algorithm for disjoint s-t paths of bounded lengthhttp://arxiv.org/abs/2404.10469v12024-04-17T00:00:00+00:00
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Michael+Kiran+Huber">Michael Kiran Huber</a></p>We study the Short Path Packing problem which asks, given a graph $G$,
integers $k$ and $\ell$, and vertices $s$ and $t$, whether there exist $k$
pairwise internally vertex-disjoint $s$-$t$ paths of length at most $\ell$. The
problem has been proven to be NP-hard and fixed-parameter tractable
parameterized by $k$ and $\ell$. Most previous research on this problem has
been theoretical, with limited practical implementations. We present an exact
FPT-algorithm based on a search-tree approach in combination with greedy
localization. While its worst case runtime complexity of $(k\cdot
\ell^2)^{k\cdot \ell}\cdot n^{O(1)}$ is larger than the state of the art, the
nature of search-tree algorithms allows for a broad range of potential
optimizations. We exploit this potential by presenting techniques for input
preprocessing, early detection of trivial and infeasible instances, and
strategic selection of promising subproblems. These approaches were implemented
and extensively tested on a large dataset of diverse graphs. The results show that
our heuristic improvements are very effective and that for the majority of
instances, we can achieve fast runtimes.</body></html>
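For intuition, the decision problem can be stated in a few lines of exhaustive search (exponential time; the paper's search-tree FPT algorithm with greedy localization is far more refined):

```python
def short_path_packing(adj, s, t, k, ell):
    """Decision version: do k pairwise internally vertex-disjoint s-t paths,
    each with at most ell edges, exist? Plain exhaustive search. (Paths with
    no internal vertices, e.g. a direct s-t edge, are not deduplicated in
    this toy.)"""

    def short_paths(banned):
        """Simple s-t paths with at most ell edges avoiding `banned` internals."""
        out, stack = [], [(s, (s,))]
        while stack:
            node, path = stack.pop()
            if node == t:
                out.append(path)
                continue
            if len(path) - 1 == ell:       # edge budget exhausted
                continue
            for nxt in adj.get(node, ()):
                if nxt == t or (nxt not in path and nxt not in banned):
                    stack.append((nxt, path + (nxt,)))
        return out

    def pick(banned, remaining):
        if remaining == 0:
            return True
        return any(pick(banned | set(path[1:-1]), remaining - 1)
                   for path in short_paths(banned))

    return pick(frozenset(), k)

adj = {"s": ["a", "b"], "a": ["t"], "b": ["t"]}
assert short_path_packing(adj, "s", "t", 2, 2)       # s-a-t and s-b-t
assert not short_path_packing(adj, "s", "t", 3, 2)
```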
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Bit catastrophes for the Burrows-Wheeler Transformhttp://arxiv.org/abs/2404.10426v12024-04-17T00:00:00+00:00
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Sara+Giuliani">Sara Giuliani</a>, <a href="https://dblp.uni-trier.de/search?q=Shunsuke+Inenaga">Shunsuke Inenaga</a>, <a href="https://dblp.uni-trier.de/search?q=Zsuzsanna+Lipt%C3%A1k">Zsuzsanna Lipták</a>, <a href="https://dblp.uni-trier.de/search?q=Giuseppe+Romana">Giuseppe Romana</a>, <a href="https://dblp.uni-trier.de/search?q=Marinella+Sciortino">Marinella Sciortino</a>, <a href="https://dblp.uni-trier.de/search?q=Cristian+Urbina">Cristian Urbina</a></p>A bit catastrophe, loosely defined, is when a change in just one character of
a string causes a significant change in the size of the compressed string. We
study this phenomenon for the Burrows-Wheeler Transform (BWT), a string
transform at the heart of several of the most popular compressors and aligners
today. The parameter determining the size of the compressed data is the number
of equal-letter runs of the BWT, commonly denoted $r$.
We exhibit infinite families of strings in which insertion, deletion, or
substitution of one character increases $r$ from constant to $\Theta(\log n)$,
where $n$ is the length of the string. These strings can be interpreted both as
examples for an increase by a multiplicative or an additive $\Theta(\log
n)$-factor. As regards the multiplicative factor, they attain the upper bound given
by Akagi, Funakoshi, and Inenaga [Inf & Comput. 2023] of $O(\log n \log r)$,
since here $r=O(1)$.
We then give examples of strings in which insertion, deletion, or
substitution of a character increases $r$ by a $\Theta(\sqrt{n})$ additive
factor. These strings significantly improve the best known lower bound for an
additive factor of $\Omega(\log n)$ [Giuliani et al., SOFSEM 2021].</body></html>
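The parameter $r$ is easy to compute from the definition. A minimal sketch (rotation-sorting BWT with a `$` sentinel, assumed absent from the input and smaller than all of its characters; fine for toy strings, far too slow for real data):

```python
def bwt(s):
    """Burrows-Wheeler Transform via sorted rotations, with a '$' sentinel
    (assumed absent from s and smaller than all of its characters)."""
    s += "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def runs(t):
    """r = number of maximal equal-letter runs of t."""
    return 1 + sum(1 for a, b in zip(t, t[1:]) if a != b)

assert bwt("banana") == "annb$aa"
assert runs(bwt("banana")) == 5          # a | nn | b | $ | aa
# A single substitution already moves r on a tiny string:
print(runs(bwt("aaaaaaaa")), runs(bwt("aaaabaaa")))   # → 2 4
```

The paper's families exhibit far more dramatic jumps, from constant $r$ to $\Theta(\log n)$ or by $\Theta(\sqrt{n})$ additively.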
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Private Vector Mean Estimation in the Shuffle Model: Optimal Rates
Require Many Messageshttp://arxiv.org/abs/2404.10201v12024-04-17T00:00:00+00:00
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Hilal+Asi">Hilal Asi</a>, <a href="https://dblp.uni-trier.de/search?q=Vitaly+Feldman">Vitaly Feldman</a>, <a href="https://dblp.uni-trier.de/search?q=Jelani+Nelson">Jelani Nelson</a>, <a href="https://dblp.uni-trier.de/search?q=Huy+L.+Nguyen">Huy L. Nguyen</a>, <a href="https://dblp.uni-trier.de/search?q=Samson+Zhou">Samson Zhou</a>, <a href="https://dblp.uni-trier.de/search?q=Kunal+Talwar">Kunal Talwar</a></p>We study the problem of private vector mean estimation in the shuffle model
of privacy where $n$ users each have a unit vector $v^{(i)} \in\mathbb{R}^d$.
We propose a new multi-message protocol that achieves the optimal error using
$\tilde{\mathcal{O}}\left(\min(n\varepsilon^2,d)\right)$ messages per user.
Moreover, we show that any (unbiased) protocol that achieves optimal error
requires each user to send $\Omega(\min(n\varepsilon^2,d)/\log(n))$ messages,
demonstrating the optimality of our message complexity up to logarithmic
factors. Additionally, we study the single-message setting and design a
protocol that achieves mean squared error
$\mathcal{O}(dn^{d/(d+2)}\varepsilon^{-4/(d+2)})$. Moreover, we show that any
single-message protocol must incur mean squared error $\Omega(dn^{d/(d+2)})$,
showing that our protocol is optimal in the standard setting where $\varepsilon
= \Theta(1)$. Finally, we study robustness to malicious users and show that
malicious users can incur large additive error with a single shuffler.</body></html>
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Node Similarities under Random Projections: Limits and Pathological
Caseshttp://arxiv.org/abs/2404.10148v12024-04-17T00:00:00+00:00
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Tvrtko+Tadi%C4%87">Tvrtko Tadić</a>, <a href="https://dblp.uni-trier.de/search?q=Cassiano+Becker">Cassiano Becker</a>, <a href="https://dblp.uni-trier.de/search?q=Jennifer+Neville">Jennifer Neville</a></p>Random Projections have been widely used to generate embeddings for various
graph tasks due to their computational efficiency. The majority of applications
have been justified through the Johnson-Lindenstrauss Lemma. In this paper, we
take a step further and investigate how well dot product and cosine similarity
are preserved by Random Projections. Our analysis provides new theoretical
results, identifies pathological cases, and tests them with numerical
experiments. We find that, for nodes of very low or very high degree, the method
produces especially unreliable embeddings for the dot product, regardless of
whether the adjacency matrix or its normalized version (the transition matrix) is used. With
respect to the statistical noise introduced by Random Projections, we show that
cosine similarity produces remarkably more precise approximations.</body></html>
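The classical Johnson-Lindenstrauss construction referred to above can be reproduced in a few lines of stdlib Python (an i.i.d. Gaussian projection matrix scaled by $1/\sqrt{k}$; a toy experiment, not the paper's setup):

```python
import math, random

def project(vecs, k, seed=0):
    """Project vectors to k dimensions with an i.i.d. Gaussian matrix scaled
    by 1/sqrt(k) -- the classical Johnson-Lindenstrauss construction."""
    rng = random.Random(seed)
    d = len(vecs[0])
    R = [[rng.gauss(0, 1) / math.sqrt(k) for _ in range(d)] for _ in range(k)]
    return [[sum(row[i] * v[i] for i in range(d)) for row in R] for v in vecs]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))

rng = random.Random(1)
u = [rng.gauss(0, 1) for _ in range(50)]
v = [rng.gauss(0, 1) for _ in range(50)]
pu, pv = project([u, v], k=2000)
# Deterministic given the seeds; the tolerance is deliberately loose.
assert abs(cosine(u, v) - cosine(pu, pv)) < 0.15
```

The paper's point is precisely that such average-case preservation can fail badly for the dot product on atypical (very low- or high-degree) nodes.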
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentarXiv: Data Structures and Algorithms: Synthetic Census Data Generation via Multidimensional Multiset Sumhttp://arxiv.org/abs/2404.10095v12024-04-17T00:00:00+00:00
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Cynthia+Dwork">Cynthia Dwork</a>, <a href="https://dblp.uni-trier.de/search?q=Kristjan+Greenewald">Kristjan Greenewald</a>, <a href="https://dblp.uni-trier.de/search?q=Manish+Raghavan">Manish Raghavan</a></p>The US Decennial Census provides valuable data for both research and policy
purposes. Census data are subject to a variety of disclosure avoidance
techniques prior to release in order to preserve respondent confidentiality.
While many are interested in studying the impacts of disclosure avoidance
methods on downstream analyses, particularly with the introduction of
differential privacy in the 2020 Decennial Census, these efforts are limited by
a critical lack of data: The underlying "microdata," which serve as necessary
input to disclosure avoidance methods, are kept confidential.
In this work, we aim to address this limitation by providing tools to
generate synthetic microdata solely from published Census statistics, which can
then be used as input to any number of disclosure avoidance algorithms for the
sake of evaluation and carrying out comparisons. We define a principled
distribution over microdata given published Census statistics and design
algorithms to sample from this distribution. We formulate synthetic data
generation in this context as a knapsack-style combinatorial optimization
problem and develop novel algorithms for this setting. While the problem we
study is provably hard, we show empirically that our methods work well in
practice, and we offer theoretical arguments to explain our performance.
Finally, we verify that the data we produce are "close" to the desired ground
truth.</body></html>
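The knapsack-style core can be illustrated with a toy version of the multidimensional multiset-sum question (hypothetical attribute vectors; a plain depth-first search, not one of the paper's algorithms): given household "types" with integer attribute vectors, find how many of each type reproduce the published totals.

```python
def multiset_sum(types, target):
    """Find nonnegative counts c with sum_i c[i] * types[i] == target
    (componentwise), by depth-first search with simple pruning.
    Exponential in the worst case -- the underlying knapsack-style
    problem is provably hard."""
    def search(i, remaining, counts):
        if all(r == 0 for r in remaining):
            return counts + [0] * (len(types) - len(counts))
        if i == len(types):
            return None
        t = types[i]
        max_c = min((r // x for r, x in zip(remaining, t) if x > 0), default=0)
        for c in range(max_c, -1, -1):
            nxt = [r - c * x for r, x in zip(remaining, t)]
            found = search(i + 1, nxt, counts + [c])
            if found is not None:
                return found
        return None

    return search(0, list(target), [])

# Attribute vectors (people, adults); published block totals: 7 people, 4 adults.
types = [(3, 2), (2, 1), (1, 1)]
print(multiset_sum(types, (7, 4)))   # → [1, 2, 0]
assert multiset_sum([(2, 0)], (3, 0)) is None
```

The paper additionally defines a principled distribution over such solutions and samples from it, rather than returning one arbitrary solution.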
arXiv: Data Structures and Algorithmshttps://arxiv.org/list/cs.DS/recentScott Aaronson: That IACR preprinthttps://scottaaronson.blog/?p=79462024-04-16T17:12:46+00:00
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<figure class="wp-block-image size-large"><img decoding="async" src="https://www.scottaaronson.com/eprint.jpg" alt=""></figure>
<p>For those who don’t yet know from their other social media: a week ago the cryptographer <a href="http://www.chenyilei.net/">Yilei Chen</a> posted a preprint, <a href="https://eprint.iacr.org/2024/555">eprint.iacr.org/2024/555</a>, claiming to give a polynomial-time quantum algorithm to solve lattice problems. For example, it claims to solve the <a href="https://en.wikipedia.org/wiki/Lattice_problem">GapSVP</a> problem, which asks to approximate the length of the shortest nonzero vector in a given n-dimensional lattice, to within an approximation ratio of ~n<sup>4.5</sup>. The best approximation ratio previously known to be achievable in classical or quantum polynomial time was exponential in n.</p>
<p>If it’s correct, this is an <em>extremely</em> big deal. It doesn’t quite break the main <a href="https://en.wikipedia.org/wiki/Lattice-based_cryptography">lattice-based cryptosystems</a>, but it would put those cryptosystems into a precarious position, vulnerable to a mere further polynomial improvement in the approximation factor. And, as we learned from the recent <a href="https://csrc.nist.gov/projects/post-quantum-cryptography">NIST competition</a>, if the lattice-based and LWE-based systems were to fall, then we really don’t have many great candidates left for post-quantum public-key cryptography! On top of that, a full quantum break of LWE (which, again, Chen is <em>not</em> claiming) would lay waste (in a world with scalable QCs, of course) to a large fraction of the beautiful sandcastles that classical and quantum cryptographers have built up over the last couple decades—everything from <a href="https://en.wikipedia.org/wiki/Homomorphic_encryption">Fully Homomorphic Encryption</a> schemes, to <a href="https://arxiv.org/abs/1804.01082">Mahadev’s protocol</a> for proving the output of any quantum computation to a classical skeptic.</p>
<p>So on the one hand, this would substantially enlarge the scope of exponential quantum speedups beyond what we knew a week ago: yet more reason to try to build scalable QCs! But on the other hand, it could also fuel an argument for coordinating to <em>slow down</em> the race to scalable fault-tolerant QCs, until the world can get its cryptographic house into better order. (Of course, as we’ve seen with the many proposals to slow down AI scaling, this might or might not be possible.)</p>
<p>So then, is the paper correct? <strong>I don’t know.</strong> It’s very obviously a serious effort by a serious researcher, a world away from the P=NP proofs that fill my inbox every day. But it might fail anyway. I’ve asked the world experts in quantum algorithms for lattice problems, and they’ve been looking at it, and none of them is ready yet to render a verdict. The central difficulty is that the algorithm is convoluted, and involves new tools that seem to come from left field, including complex Gaussian functions, the windowed quantum Fourier transform, and Karst waves (whatever those are). The algorithm has 9 phases by the author’s count. In my own perusal, I haven’t yet extracted even a high-level intuition—I can’t tell any little story like for Shor’s algorithm, e.g. “first you reduce factoring to period-finding, then you solve period-finding by applying a Fourier transform to a vector of amplitudes.”</p>
<p>So, the main purpose of this post is simply to throw things open to commenters! I’m happy to provide a public clearinghouse for questions and comments about the preprint, if those studying it would like that. You can even embed LaTeX in your comments, as will probably be needed to get anywhere.</p>
<hr class="wp-block-separator has-alpha-channel-opacity">
<p><strong><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">Unrelated Update:</mark></strong> Connor Tabarrok and his friends just put a <a href="https://www.youtube.com/watch?v=EK2zEyFyYXM">podcast with me up on YouTube</a>, in which they interview me in my office at UT Austin about watermarking of large language models and other AI safety measures.</p>
<p class="authors">By Scott</p>
</body></html>
Scott Aaronsonhttps://scottaaronson.blogarXiv: Computational Complexity: On the complexity of some cycle convexity parametershttp://arxiv.org/abs/2404.09236v12024-04-16T00:00:00+00:00
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Carlos+V.+G.+C.+Lima">Carlos V. G. C. Lima</a>, <a href="https://dblp.uni-trier.de/search?q=Thiago+Marcilon">Thiago Marcilon</a>, <a href="https://dblp.uni-trier.de/search?q=Pedro+Paulo+de+Medeiros">Pedro Paulo de Medeiros</a></p>The subject of graph convexity is well explored in the literature, the
so-called interval convexities above all. In this work, we explore the cycle
convexity, whose interval function is $I(S) = S \cup \{u \mid G[S \cup \{u\}]
\text{ has a cycle containing } u\}$. In this convexity, we prove that the decision
problems associated with the parameters rank and convexity number are
\NP-complete and \W[1]-hard when parameterized by the solution size. We also
prove that determining whether the percolation time of a graph is at least $k$
is \NP-complete, but polynomial-time solvable for cacti or when $k\leq2$.</body></html>
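The interval function of the cycle convexity is straightforward to implement from its definition (a direct illustration, not from the paper): $u$ lies on a cycle of $G[S \cup \{u\}]$ exactly when two of its neighbours in $S$ fall in the same connected component of $G[S]$.

```python
def cycle_interval(G, S):
    """I(S) = S ∪ {u : G[S ∪ {u}] has a cycle through u}.
    u closes such a cycle iff two of its neighbours in S lie in the same
    connected component of G[S]."""
    S = set(S)

    # Connected components of the induced subgraph G[S], via DFS.
    comp, cid = {}, 0
    for v in S:
        if v in comp:
            continue
        comp[v] = cid
        stack = [v]
        while stack:
            x = stack.pop()
            for y in G[x]:
                if y in S and y not in comp:
                    comp[y] = cid
                    stack.append(y)
        cid += 1

    out = set(S)
    for u in G:
        if u in S:
            continue
        comps = [comp[w] for w in G[u] if w in S]
        if len(comps) != len(set(comps)):   # two neighbours share a component
            out.add(u)
    return out

# 4-cycle a-b-c-d: with S = {a, b, c}, vertex d closes a cycle.
G = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
assert cycle_interval(G, {"a", "b", "c"}) == {"a", "b", "c", "d"}
```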
arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)
arXiv: Computational Complexity: Almost Optimal Time Lower Bound for Approximating Parameterized Clique, CSP, and More, under ETH (http://arxiv.org/abs/2404.08870v1, 2024-04-16T00:00:00+00:00)
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Venkatesan+Guruswami">Venkatesan Guruswami</a>, <a href="https://dblp.uni-trier.de/search?q=Bingkai+Lin">Bingkai Lin</a>, <a href="https://dblp.uni-trier.de/search?q=Xuandi+Ren">Xuandi Ren</a>, <a href="https://dblp.uni-trier.de/search?q=Yican+Sun">Yican Sun</a>, <a href="https://dblp.uni-trier.de/search?q=Kewen+Wu">Kewen Wu</a></p>The Parameterized Inapproximability Hypothesis (PIH), which is an analog of
the PCP theorem in parameterized complexity, asserts that there is a constant
$\varepsilon> 0$ such that for any computable function
$f:\mathbb{N}\to\mathbb{N}$, no $f(k)\cdot n^{O(1)}$-time algorithm can, on
input a $k$-variable CSP instance with domain size $n$, find an assignment
satisfying $1-\varepsilon$ fraction of the constraints. A recent work by
Guruswami, Lin, Ren, Sun, and Wu (STOC'24) established PIH under the
Exponential Time Hypothesis (ETH).
In this work, we improve the quantitative aspects of PIH and prove (under
ETH) that approximating sparse parameterized CSPs within a constant factor
requires $n^{k^{1-o(1)}}$ time. This immediately implies that, assuming ETH,
finding a $(k/2)$-clique in an $n$-vertex graph with a $k$-clique requires
$n^{k^{1-o(1)}}$ time. We also prove almost optimal time lower bounds for
approximating $k$-ExactCover and Max $k$-Coverage.
Our proof follows the blueprint of the previous work to identify a
"vector-structured" ETH-hard CSP whose satisfiability can be checked via an
appropriate form of "parallel" PCP. Using further ideas in the reduction, we
guarantee additional structures for constraints in the CSP. We then leverage
this to design a parallel PCP of almost linear size based on Reed-Muller codes
and derandomized low degree testing.</body></html>
arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)
arXiv: Computational Complexity: The Illusion of State in State-Space Models (http://arxiv.org/abs/2404.08819v1, 2024-04-16T00:00:00+00:00)
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=William+Merrill">William Merrill</a>, <a href="https://dblp.uni-trier.de/search?q=Jackson+Petty">Jackson Petty</a>, <a href="https://dblp.uni-trier.de/search?q=Ashish+Sabharwal">Ashish Sabharwal</a></p>State-space models (SSMs) have emerged as a potential alternative
architecture for building large language models (LLMs) compared to the
previously ubiquitous transformer architecture. One theoretical weakness of
transformers is that they cannot express certain kinds of sequential
computation and state tracking (Merrill and Sabharwal, 2023), which SSMs are
explicitly designed to address via their close architectural similarity to
recurrent neural networks (RNNs). But do SSMs truly have an advantage (over
transformers) in expressive power for state tracking? Surprisingly, the answer
is no. Our analysis reveals that the expressive power of SSMs is limited very
similarly to transformers: SSMs cannot express computation outside the
complexity class $\mathsf{TC}^0$. In particular, this means they cannot solve
simple state-tracking problems like permutation composition. It follows that
SSMs are provably unable to accurately track chess moves with certain notation,
evaluate code, or track entities in a long narrative. To supplement our formal
analysis, we report experiments showing that Mamba-style SSMs indeed struggle
with state tracking. Thus, despite its recurrent formulation, the "state" in an
SSM is an illusion: SSMs have similar expressiveness limitations to
non-recurrent models like transformers, which may fundamentally limit their
ability to solve real-world state-tracking problems.</body></html>
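The state-tracking task named in the abstract, permutation composition, is trivial for a genuinely recurrent model: one pass over the sequence carrying a small state. A toy Python illustration (names and encoding are mine, not from the paper):

```python
def compose_stream(perms, n=5):
    """Track the composition of a stream of permutations of range(n).
    Each permutation p rearranges the current state: position i receives
    the element previously at position p[i]. A model must carry this
    state across the whole stream; the paper shows TC^0-bounded models
    (transformers and the SSMs analyzed) cannot express it for n >= 5."""
    state = tuple(range(n))
    for p in perms:
        state = tuple(state[p[i]] for i in range(n))
    return state
```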
arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)
arXiv: Computational Complexity: The Fine-Grained Complexity of Graph Homomorphism Problems: Towards the Okrasa and Rzążewski Conjecture (http://arxiv.org/abs/2404.09798v1, 2024-04-16T00:00:00+00:00)
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Ambroise+Baril">Ambroise Baril</a>, <a href="https://dblp.uni-trier.de/search?q=Miguel+Couceiro">Miguel Couceiro</a>, <a href="https://dblp.uni-trier.de/search?q=Victor+Lagerkvist">Victor Lagerkvist</a></p>In this paper we are interested in the fine-grained complexity of deciding
whether there is a homomorphism from an input graph $G$ to a fixed graph $H$
(the $H$-Coloring problem). The starting point is that these problems can be
viewed as constraint satisfaction problems (CSPs), and that (partial)
polymorphisms of binary relations are of paramount importance in the study of
complexity classes of such CSPs.
Thus, we first investigate the expressivity of binary symmetric relations
$E_H$ and their corresponding (partial) polymorphisms pPol($E_H$). For
irreflexive graphs we observe that there is no pair of graphs $H$ and $H'$ such
that pPol($E_H$) $\subseteq$ pPol($E_{H'}$), unless $E_{H'}= \emptyset$ or $H
=H'$. More generally we show the existence of an $n$-ary relation $R$ whose
partial polymorphisms strictly subsume those of $H$ and such that CSP($R$) is
NP-complete if and only if $H$ contains an odd cycle of length at most $n$.
Motivated by this we also describe the sets of total polymorphisms of
nontrivial cliques, odd cycles, as well as certain cores, and we give an
algebraic characterization of projective cores. As a by-product, we settle the
Okrasa and Rzążewski conjecture for all graphs of at most 7 vertices.</body></html>
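For intuition, an $H$-coloring of $G$ is just a vertex map sending edges to edges, and the naive algorithm tries all $|V(H)|^{|V(G)|}$ maps; the paper studies how far this can be beaten for each fixed $H$. A small illustrative brute-force sketch (function name and encoding are mine):

```python
from itertools import product

def has_homomorphism(G_edges, G_vertices, H_edges, H_vertices):
    """Brute-force test for a homomorphism G -> H: a map on vertices
    sending every edge of G to an edge of H. For loopless H this also
    forbids collapsing an edge's endpoints to one vertex."""
    H_adj = {frozenset(e) for e in H_edges}
    for image in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, image))
        if all(frozenset((f[u], f[v])) in H_adj for u, v in G_edges):
            return True
    return False
```

For $H = K_2$ this is exactly a bipartiteness test: even cycles map, odd cycles do not.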
arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)
arXiv: Computational Complexity: Kernelization Algorithms for the Eigenvalue Deletion Problems (http://arxiv.org/abs/2404.10023v1, 2024-04-16T00:00:00+00:00)
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Ajinkya+Gaikwad">Ajinkya Gaikwad</a>, <a href="https://dblp.uni-trier.de/search?q=Hitendra+Kumar">Hitendra Kumar</a>, <a href="https://dblp.uni-trier.de/search?q=Soumen+Maity">Soumen Maity</a></p>Given a graph $G=(V,E)$ and an integer $k\in \mathbb{N}$, we study 2-Eigenvalue Vertex Deletion (2-EVD), where the goal is to remove at most $k$ vertices such that the adjacency matrix of the resulting graph has at most 2 eigenvalues. It is known that the adjacency matrix of a graph has at most 2 eigenvalues if and only if the graph is a collection of equal-sized cliques. So 2-Eigenvalue Vertex Deletion amounts to removing a set of at most $k$ vertices such that the resulting graph is a collection of equal-sized cliques. The 2-Eigenvalue Edge Editing (2-EEE), 2-Eigenvalue Edge Deletion (2-EED) and 2-Eigenvalue Edge Addition (2-EEA) problems are defined analogously. We provide a kernel of size $\mathcal{O}(k^{3})$ for $2$-EVD. For the problems $2$-EEE and $2$-EED, we provide kernels of size $\mathcal{O}(k^{2})$. Finally, we provide a linear kernel of size $6k$ for $2$-EEA. We thereby resolve three open questions listed by Misra et al. (ISAAC 2023) concerning the complexity of these problems parameterized by the solution size.</body></html>
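The structural fact these kernels exploit is easy to test directly: the adjacency matrix has at most 2 eigenvalues iff the graph is a disjoint union of equal-sized cliques. A brute-force check of that combinatorial target property (illustrative code, not from the paper):

```python
from collections import deque

def is_union_of_equal_cliques(adj):
    """Check the target property of 2-EVD: every connected component is
    a clique, and all components have the same size. `adj` maps each
    vertex to the set of its neighbours."""
    seen, sizes = set(), set()
    for s in adj:
        if s in seen:
            continue
        # collect the connected component of s by BFS
        comp, queue = {s}, deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in comp:
                    comp.add(w)
                    queue.append(w)
        # a component is a clique iff every vertex sees all others in it
        if any(len(adj[v] & comp) != len(comp) - 1 for v in comp):
            return False
        sizes.add(len(comp))
        seen |= comp
    return len(sizes) <= 1
```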
arXiv: Computational Complexity (https://arxiv.org/list/cs.CC/recent)
arXiv: Computational Geometry: Reconstructing Curves from Sparse Samples on Riemannian Manifolds (http://arxiv.org/abs/2404.09661v1, 2024-04-16T00:00:00+00:00)
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Diana+Marin">Diana Marin</a>, <a href="https://dblp.uni-trier.de/search?q=Filippo+Maggioli">Filippo Maggioli</a>, <a href="https://dblp.uni-trier.de/search?q=Simone+Melzi">Simone Melzi</a>, <a href="https://dblp.uni-trier.de/search?q=Stefan+Ohrhallinger">Stefan Ohrhallinger</a>, <a href="https://dblp.uni-trier.de/search?q=Michael+Wimmer">Michael Wimmer</a></p>Reconstructing 2D curves from sample points has long been a critical
challenge in computer graphics, finding essential applications in vector
graphics. The design and editing of curves on surfaces has only recently begun to receive attention; existing methods rely primarily on human assistance and, where they do not, are limited by very strict sampling conditions. In this work, we formally improve
on the state-of-the-art requirements and introduce an innovative algorithm
capable of reconstructing closed curves directly on surfaces from a given
sparse set of sample points. We extend and adapt a state-of-the-art planar
curve reconstruction method to the realm of surfaces while dealing with the
challenges arising from working on non-Euclidean domains. We demonstrate the
robustness of our method by reconstructing multiple curves on various surface
meshes. We explore novel potential applications of our approach, allowing for
automated reconstruction of curves on Riemannian manifolds.</body></html>
arXiv: Computational Geometry (https://arxiv.org/list/cs.CG/recent)
arXiv: Computational Geometry: Wasserstein Wormhole: Scalable Optimal Transport Distance with Transformers (http://arxiv.org/abs/2404.09411v1, 2024-04-16T00:00:00+00:00)
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Doron+Haviv">Doron Haviv</a>, <a href="https://dblp.uni-trier.de/search?q=Russell+Zhang+Kunes">Russell Zhang Kunes</a>, <a href="https://dblp.uni-trier.de/search?q=Thomas+Dougherty">Thomas Dougherty</a>, <a href="https://dblp.uni-trier.de/search?q=Cassandra+Burdziak">Cassandra Burdziak</a>, <a href="https://dblp.uni-trier.de/search?q=Tal+Nawy">Tal Nawy</a>, <a href="https://dblp.uni-trier.de/search?q=Anna+Gilbert">Anna Gilbert</a>, <a href="https://dblp.uni-trier.de/search?q=Dana+Pe%27er">Dana Pe'er</a></p>Optimal transport (OT) and the related Wasserstein metric (W) are powerful
and ubiquitous tools for comparing distributions. However, computing pairwise
Wasserstein distances rapidly becomes intractable as cohort size grows. An
attractive alternative would be to find an embedding space in which pairwise
Euclidean distances map to OT distances, akin to standard multidimensional
scaling (MDS). We present Wasserstein Wormhole, a transformer-based autoencoder
that embeds empirical distributions into a latent space wherein Euclidean
distances approximate OT distances. Extending MDS theory, we show that our
objective function implies a bound on the error incurred when embedding
non-Euclidean distances. Empirically, distances between Wormhole embeddings
closely match Wasserstein distances, enabling linear time computation of OT
distances. Along with an encoder that maps distributions to embeddings,
Wasserstein Wormhole includes a decoder that maps embeddings back to
distributions, allowing for operations in the embedding space to generalize to
OT spaces, such as Wasserstein barycenter estimation and OT interpolation. By
lending scalability and interpretability to OT approaches, Wasserstein Wormhole
unlocks new avenues for data analysis in the fields of computational geometry
and single-cell biology.</body></html>
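For a feel of the target quantity: in one dimension, with equally many samples on each side, the exact 1-Wasserstein distance is obtained by matching sorted samples in order; it is this kind of OT distance that the embedding aims to approximate with plain Euclidean distances. A tiny illustrative sketch (mine, not the paper's code):

```python
def wasserstein1_1d(xs, ys):
    """Exact 1-Wasserstein distance between two empirical distributions
    on the line with equally many samples: the optimal transport plan
    matches sorted samples in order."""
    assert len(xs) == len(ys), "equal sample counts assumed"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)
```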
arXiv: Computational Geometry (https://arxiv.org/list/cs.CG/recent)
arXiv: Computational Geometry: Computing distances and means on manifolds with a metric-constrained Eikonal approach (http://arxiv.org/abs/2404.08754v1, 2024-04-16T00:00:00+00:00)
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Daniel+Kelshaw">Daniel Kelshaw</a>, <a href="https://dblp.uni-trier.de/search?q=Luca+Magri">Luca Magri</a></p>Computing distances on Riemannian manifolds is a challenging problem with
numerous applications, from physics, through statistics, to machine learning.
In this paper, we introduce the metric-constrained Eikonal solver to obtain
continuous, differentiable representations of distance functions on manifolds.
The differentiable nature of these representations allows for the direct
computation of globally length-minimising paths on the manifold. We showcase
the use of metric-constrained Eikonal solvers for a range of manifolds and
demonstrate the applications. First, we demonstrate that metric-constrained
Eikonal solvers can be used to obtain the Fréchet mean on a manifold,
employing the definition of a Gaussian mixture model, which has an analytical
solution to verify the numerical results. Second, we demonstrate how the
obtained distance function can be used to conduct unsupervised clustering on
the manifold -- a task for which existing approaches are computationally
prohibitive. This work opens opportunities for distance computations on
manifolds.</body></html>
arXiv: Computational Geometry (https://arxiv.org/list/cs.CG/recent)
arXiv: Computational Geometry: Hardness of Packing, Covering and Partitioning Simple Polygons with Unit Squares (http://arxiv.org/abs/2404.09835v1, 2024-04-16T00:00:00+00:00)
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Jack+Stade">Jack Stade</a>, <a href="https://dblp.uni-trier.de/search?q=Mikkel+Abrahamsen">Mikkel Abrahamsen</a></p>We show that packing axis-aligned unit squares into a simple polygon $P$ is
NP-hard, even when $P$ is an orthogonal and orthogonally convex polygon with
half-integer coordinates. It has been known since the early 80s that packing
unit squares into a polygon with holes is NP-hard [Fowler, Paterson, Tanimoto,
Inf. Process. Lett., 1981], but the version without holes was conjectured to be
polynomial-time solvable more than two decades ago [Baur and Fekete,
Algorithmica, 2001].
Our reduction relies on a new way of reducing from Planar-3SAT.
Interestingly, our geometric realization of a planar formula is non-planar.
Vertices become rows and edges become columns, with crossings being allowed.
The planarity ensures that all endpoints of rows and columns are incident to
the outer face of the resulting drawing. We can then construct a polygon
following the outer face that realizes all the logic of the formula
geometrically, without the need for any holes.
This new reduction technique proves to be general enough to also show
hardness of two natural covering and partitioning problems, even when the input
polygon is simple. We say that a polygon $Q$ is <em>small</em> if $Q$ is
contained in a unit square. We prove that it is NP-hard to find a minimum
number of small polygons whose union is $P$ (covering) and to find a minimum
number of pairwise interior-disjoint small polygons whose union is $P$
(partitioning), when $P$ is an orthogonal simple polygon with half-integer
coordinates. This is the first partitioning problem known to be NP-hard for
polygons without holes, with the usual objective of minimizing the number of
pieces.</body></html>
arXiv: Computational Geometry (https://arxiv.org/list/cs.CG/recent)
arXiv: Computational Geometry: Eliminating Crossings in Ordered Graphs (http://arxiv.org/abs/2404.09771v1, 2024-04-16T00:00:00+00:00)
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Akanksha+Agrawal">Akanksha Agrawal</a>, <a href="https://dblp.uni-trier.de/search?q=Sergio+Cabello">Sergio Cabello</a>, <a href="https://dblp.uni-trier.de/search?q=Michael+Kaufmann">Michael Kaufmann</a>, <a href="https://dblp.uni-trier.de/search?q=Saket+Saurabh">Saket Saurabh</a>, <a href="https://dblp.uni-trier.de/search?q=Roohani+Sharma">Roohani Sharma</a>, <a href="https://dblp.uni-trier.de/search?q=Yushi+Uno">Yushi Uno</a>, <a href="https://dblp.uni-trier.de/search?q=Alexander+Wolff">Alexander Wolff</a></p>Drawing a graph in the plane with as few crossings as possible is one of the
central problems in graph drawing and computational geometry. Another option is
to remove the smallest number of vertices or edges such that the remaining
graph can be drawn without crossings. We study both problems in a
book-embedding setting for ordered graphs, that is, graphs with a fixed vertex
order. In this setting, the vertices lie on a straight line, called the spine,
in the given order, and each edge must be drawn on one of several pages of a
book such that every edge has at most a fixed number of crossings. In book
embeddings, there is another way to reduce or avoid crossings; namely by using
more pages. The minimum number of pages needed to draw an ordered graph without
any crossings is its (fixed-vertex-order) page number.
We show that the page number of an ordered graph with $n$ vertices and $m$
edges can be computed in $2^m \cdot n^{O(1)}$ time. An $O(\log
n)$-approximation of this number can be computed efficiently. We can decide in
$2^{O(d \sqrt{k} \log (d+k))} \cdot n^{O(1)}$ time whether it suffices to
delete $k$ edges of an ordered graph to obtain a $d$-planar layout (where every
edge crosses at most $d$ other edges) on one page. As an additional parameter,
we consider the size $h$ of a hitting set, that is, a set of points on the
spine such that every edge, seen as an open interval, contains at least one of
the points. For $h=1$, we can efficiently compute the minimum number of edges
whose deletion yields fixed-vertex-order page number $p$. For $h>1$, we give an
XP algorithm with respect to $h+p$. Finally, we consider spine+$t$-track
drawings, where some but not all vertices lie on the spine. The vertex order on
the spine is given; we must map every vertex that does not lie on the spine to
one of $t$ tracks, each of which is a straight line on a separate page,
parallel to the spine.</body></html>
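The basic quantity in this setting is simple to compute for a single page: with the vertices in their fixed order along the spine, two edges drawn on the same page cross iff their endpoints interleave. An illustrative sketch (naming and encoding are mine):

```python
def page_crossings(edges):
    """Count crossings when all edges of an ordered graph are drawn on
    one page: edges (a, b) and (c, d), written with a < b and c < d,
    cross iff a < c < b < d, i.e. their endpoints interleave along the
    spine. Vertices are identified with their spine positions."""
    es = [tuple(sorted(e)) for e in edges]
    count = 0
    for i in range(len(es)):
        for j in range(i + 1, len(es)):
            (a, b), (c, d) = sorted((es[i], es[j]))
            count += a < c < b < d
    return count
```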
arXiv: Computational Geometry (https://arxiv.org/list/cs.CG/recent)
arXiv: Data Structures and Algorithms: Online Multi-level Aggregation with Delays and Stochastic Arrivals (http://arxiv.org/abs/2404.09711v1, 2024-04-16T00:00:00+00:00)
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Mathieu+Mari">Mathieu Mari</a>, <a href="https://dblp.uni-trier.de/search?q=Micha%C5%82+Paw%C5%82owski">Michał Pawłowski</a>, <a href="https://dblp.uni-trier.de/search?q=Runtian+Ren">Runtian Ren</a>, <a href="https://dblp.uni-trier.de/search?q=Piotr+Sankowski">Piotr Sankowski</a></p>This paper presents a new research direction for online Multi-Level
Aggregation (MLA) with delays. In this problem, we are given an edge-weighted
rooted tree $T$, and we have to serve a sequence of requests arriving at its
vertices in an online manner. Each request $r$ is characterized by two
parameters: its arrival time $t(r)$ and location $l(r)$ (a vertex). Once a
request $r$ arrives, we can either serve it immediately or postpone this action
until any time $t > t(r)$. We can serve several pending requests at the same
time, and the service cost of a service corresponds to the weight of the
subtree that contains all the requests served and the root of $T$. Postponing
the service of a request $r$ to time $t > t(r)$ generates an additional delay
cost of $t - t(r)$. The goal is to serve all requests in an online manner such
that the total cost (i.e., the total sum of service and delay costs) is
minimized. The current best algorithm for this problem achieves a competitive
ratio of $O(d^2)$ (Azar and Touitou, FOCS'19), where $d$ denotes the depth of
the tree.
Here, we consider a stochastic version of MLA where the requests follow a
Poisson arrival process. We present a deterministic online algorithm which
achieves a constant ratio of expectations, meaning that the ratio between the
expected costs of the solution generated by our algorithm and the optimal
offline solution is bounded by a constant. Our algorithm is obtained by
carefully combining two strategies. In the first one, we plan periodic
oblivious visits to the subset of frequent vertices, whereas in the second one,
we greedily serve the pending requests in the remaining vertices. This problem
is complex enough to demonstrate a very rare phenomenon: "single-minded" or "sample-average" strategies are not enough in stochastic optimization.</body></html>
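The cost model can be made concrete in a few lines: a service pays the weight of the union of root-paths of the served locations, and each served request additionally pays its waiting time. A toy evaluator for a given schedule (an illustrative sketch under my own encoding, not the authors' code):

```python
def service_cost(parent, weight, served, root=0):
    """Cost of one service in MLA: the weight of the smallest subtree
    containing the root and all served locations, i.e. the union of
    root-paths of the served vertices. `parent[v]` is v's parent and
    `weight[v]` the weight of the edge from v up to parent[v]."""
    edges = set()  # a non-root vertex v stands for the edge (v, parent[v])
    for v in served:
        while v != root and v not in edges:
            edges.add(v)
            v = parent[v]
    return sum(weight[v] for v in edges)

def total_cost(parent, weight, schedule, root=0):
    """Total cost of a schedule given as a list of (service_time, reqs),
    where each request is (arrival_time, location): service costs plus
    the delay t - t(r) accumulated by each served request."""
    cost = 0.0
    for t, reqs in schedule:
        cost += service_cost(parent, weight, [loc for _, loc in reqs], root)
        cost += sum(t - arr for arr, _ in reqs)
    return cost
```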
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)
arXiv: Data Structures and Algorithms: A Circus of Circuits: Connections Between Decision Diagrams, Circuits, and Automata (http://arxiv.org/abs/2404.09674v1, 2024-04-16T00:00:00+00:00)
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Antoine+Amarilli">Antoine Amarilli</a>, <a href="https://dblp.uni-trier.de/search?q=Marcelo+Arenas">Marcelo Arenas</a>, <a href="https://dblp.uni-trier.de/search?q=YooJung+Choi">YooJung Choi</a>, <a href="https://dblp.uni-trier.de/search?q=Mika%C3%ABl+Monet">Mikaël Monet</a>, <a href="https://dblp.uni-trier.de/search?q=Guy+Van+den+Broeck">Guy Van den Broeck</a>, <a href="https://dblp.uni-trier.de/search?q=Benjie+Wang">Benjie Wang</a></p>This document is an introduction to two related formalisms to define Boolean
functions: binary decision diagrams, and Boolean circuits. It presents these
formalisms and several of their variants studied in the setting of knowledge
compilation. Finally, it explains how these formalisms can be connected to the
notions of automata over words and trees.</body></html>
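The first of the two formalisms can be illustrated in a few lines: evaluating a binary decision diagram is just following one root-to-leaf path directed by the input assignment. A minimal sketch (node encoding is mine, not from the document):

```python
def eval_bdd(nodes, root, assignment):
    """Evaluate a binary decision diagram. `nodes` maps an internal node
    id to (variable, low_child, high_child); leaves are the Python
    booleans True/False. At each node, follow the child selected by the
    tested variable's value until a leaf is reached."""
    node = root
    while not isinstance(node, bool):
        var, lo, hi = nodes[node]
        node = hi if assignment[var] else lo
    return node

# A two-node diagram for the function x AND y.
and_bdd = {
    "n_x": ("x", False, "n_y"),  # if x is false, the function is false
    "n_y": ("y", False, True),   # otherwise the result is just y
}
```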
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)
arXiv: Data Structures and Algorithms: Better space-time-robustness trade-offs for set reconciliation (http://arxiv.org/abs/2404.09607v1, 2024-04-16T00:00:00+00:00)
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Djamal+Belazzougui">Djamal Belazzougui</a>, <a href="https://dblp.uni-trier.de/search?q=Gregory+Kucherov">Gregory Kucherov</a>, <a href="https://dblp.uni-trier.de/search?q=Stefan+Walzer">Stefan Walzer</a></p>We consider the problem of reconstructing the symmetric difference between
similar sets from their representations (sketches) of size linear in the number
of differences. Exact solutions to this problem are based on error-correcting
coding techniques and suffer from a large decoding time. Existing probabilistic
solutions based on Invertible Bloom Lookup Tables (IBLTs) are time-efficient
but offer insufficient success guarantees for many applications. Here we
propose a tunable trade-off between the two approaches combining the efficiency
of IBLTs with exponentially decreasing failure probability. The proof relies on
a refined analysis of IBLTs proposed in (Baek Tejs Houen et al., SOSA 2023), which is of independent interest. We also propose a modification of our
algorithm that enables telling apart the elements of each set in the symmetric
difference.</body></html>
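The idea behind such sketches can be illustrated in the degenerate case of a single difference: a count plus an XOR of all elements already recovers it. An IBLT replicates this count/XOR cell structure across many hashed cells and recovers larger differences by peeling; the toy below (entirely illustrative, not the paper's construction) only handles a symmetric difference of one integer:

```python
def xor_sketch(items):
    """Toy one-cell sketch of an integer set: (element count, XOR sum)."""
    acc = 0
    for x in items:
        acc ^= x
    return (len(items), acc)

def reconcile_one(sketch_a, sketch_b):
    """Assuming the symmetric difference of the two sketched sets has
    exactly one element, recover it and report which set contains it:
    elements common to both sets cancel in the XOR."""
    (count_a, xor_a), (count_b, xor_b) = sketch_a, sketch_b
    element = xor_a ^ xor_b
    side = "A" if count_a > count_b else "B"
    return element, side
```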
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)
arXiv: Data Structures and Algorithms: Improved Approximations for Flexible Network Design (http://arxiv.org/abs/2404.08972v1, 2024-04-16T00:00:00+00:00)
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Dylan+Hyatt-Denesik">Dylan Hyatt-Denesik</a>, <a href="https://dblp.uni-trier.de/search?q=Afrouz+Jabal+Ameli">Afrouz Jabal Ameli</a>, <a href="https://dblp.uni-trier.de/search?q=Laura+Sanita">Laura Sanita</a></p>Flexible network design deals with building a network that guarantees some
connectivity requirements between its vertices, even when some of its elements
(like vertices or edges) fail. In particular, the set of edges (resp. vertices)
of a given graph is here partitioned into safe and unsafe. The goal is to identify a minimum-size subgraph that is 2-edge-connected (resp. 2-vertex-connected), and stays so whenever any of the unsafe elements gets
removed.
In this paper, we provide improved approximation algorithms for flexible
network design problems, considering both edge-connectivity and
vertex-connectivity, as well as connectivity values higher than 2. For the
vertex-connectivity variant, in particular, our algorithm is the first with
approximation factor strictly better than 2.</body></html>
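The requirement being optimized can be checked by brute force on small instances: the chosen subgraph must be 2-edge-connected and remain so after deleting any single unsafe edge. An illustrative checker (deliberately naive, not the authors' algorithm):

```python
def connected(n, edges):
    """DFS connectivity check on vertices 0..n-1."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def robust_2ec(n, edges, unsafe):
    """Check the flexible edge-connectivity requirement: the graph is
    2-edge-connected and remains so after deleting any single unsafe
    edge. 2-edge-connectivity itself is tested naively: connected, and
    still connected after removing any one edge."""
    def two_ec(es):
        return connected(n, es) and all(
            connected(n, es[:i] + es[i + 1:]) for i in range(len(es)))
    return two_ec(edges) and all(
        two_ec([e for e in edges if e != u]) for u in unsafe)
```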
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)
arXiv: Data Structures and Algorithms: Search-Space Reduction Via Essential Vertices Revisited: Vertex Multicut and Cograph Deletion (http://arxiv.org/abs/2404.09769v1, 2024-04-16T00:00:00+00:00)
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Bart+M.+P.+Jansen">Bart M. P. Jansen</a>, <a href="https://dblp.uni-trier.de/search?q=Ruben+F.+A.+Verhaegh">Ruben F. A. Verhaegh</a></p>For an optimization problem $\Pi$ on graphs whose solutions are vertex sets,
a vertex $v$ is called $c$-essential for $\Pi$ if all solutions of size at most
$c \cdot OPT$ contain $v$. Recent work showed that polynomial-time algorithms
to detect $c$-essential vertices can be used to reduce the search space of
fixed-parameter tractable algorithms solving such problems parameterized by the
size $k$ of the solution. We provide several new upper and lower bounds for
detecting essential vertices. For example, we give a polynomial-time algorithm
for $3$-Essential detection for Vertex Multicut, which translates into an
algorithm that finds a minimum multicut of an undirected $n$-vertex graph $G$
in time $2^{O(\ell^3)} \cdot n^{O(1)}$, where $\ell$ is the number of vertices
in an optimal solution that are not $3$-essential. Our positive results are
obtained by analyzing the integrality gaps of certain linear programs. Our
lower bounds show that for sufficiently small values of $c$, the detection task
becomes NP-hard assuming the Unique Games Conjecture. For example, we show that
($2-\varepsilon$)-Essential detection for Directed Feedback Vertex Set is
NP-hard under this conjecture, thereby proving that the existing algorithm that
detects $2$-essential vertices is best possible.</body></html>
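The central definition can be made concrete by brute force on a toy problem. Below, $c$-essential vertices are computed for Vertex Cover, used purely as a stand-in vertex-subset problem (the paper works with Vertex Multicut, Directed Feedback Vertex Set, and others; the code is illustrative only):

```python
from itertools import combinations

def essential_vertices(n, edges, c=1.0):
    """Brute-force the c-essential vertices for Vertex Cover on vertices
    0..n-1: the vertices that belong to EVERY vertex cover of size at
    most c * OPT. Exponential time, for illustration only."""
    def is_cover(S):
        return all(u in S or v in S for u, v in edges)
    # smallest k admitting a cover of size k
    opt = next(k for k in range(n + 1)
               if any(is_cover(set(S)) for S in combinations(range(n), k)))
    essential = set(range(n))
    for k in range(opt, int(c * opt) + 1):
        for S in combinations(range(n), k):
            if is_cover(set(S)):
                essential &= set(S)
    return essential
```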
arXiv: Data Structures and Algorithms (https://arxiv.org/list/cs.DS/recent)
David Eppstein: Linkage (https://11011110.github.io/blog/2024/04/15/linkage, 2024-04-15T22:03:00+00:00)
<html><body>
<ul>
<li>
<p>Michael Mitzenmacher is making an unusual request for publicity for his NON-participation in a conference <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/112193063689049863">\(\mathbb{M}\)</a>).</span> It calls itself ICAIM, the International Conference on Artificial Intelligence and Mathematics, to be held in Nanjing in September, and it falsely lists Mitzenmacher as a conference chair, Mike Goodrich as program chair, and Jelani Nelson as a program committee member, among others. None of Michael, Mike, or Jelani has agreed to allow their names to be used in this way. The conference contact email goes to a known spam/phishing site. It appears to be a scam. Be aware!</p>
</li>
<li>
<p>Here’s an example of why it’s important to base Wikipedia’s mathematics content on published sources rather than going by your intuition <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/112204618068438793">\(\mathbb{M}\)</a>).</span> Wikipedia has an article on the <a href="https://en.wikipedia.org/wiki/Classification_of_manifolds">classification of manifolds</a>, topological spaces that look “locally” like Euclidean spaces, in the sense that each point has a neighborhood within which the topology is the same as a Euclidean space. From 2007 until very recently this article has said that there are only two connected 1-dimensional manifolds: the circle and the line. You can draw other curves, of course, but they will be topologically equivalent to one or the other of these two spaces. But this is wrong (for the naive definition above; often manifolds are restricted to have a countable open cover, to avoid this sort of example)! There are two more possibilities, the <a href="https://en.wikipedia.org/wiki/Long_line_(topology)">long line</a> obtained by gluing together an uncountable number of copies of a unit interval, and a line-like space obtained by gluing together an ordinary real ray and a long ray. See Frolík (1962), “<a href="https://hdl.handle.net/10338.dmlcz/142137">On the classification of 1-dimensional manifolds</a>”. You might think that just as there is more than one line, there might be more than one long line, obtained by gluing together sets of intervals of different cardinalities. But no, it stops here: apparently only the smallest uncountable ordinal works as a gluing pattern. Anything else would have a limit point where more than countably many unit intervals appear in every neighborhood, and that limit point is not locally Euclidean.</p>
</li>
<li>
<p>If you were using Google Chart to render LaTeX formulas in web pages, it’s long past time to change to something else <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@dpiponi/112202547763665745">\(\mathbb{M}\)</a>).</span> That’s been <a href="https://developers.google.com/chart/infographics/docs/formulas">deprecated so long ago that the deprecation expired in 2015</a>, and it seems it finally stopped working.</p>
</li>
<li>
<p><a href="https://cameroncounts.wordpress.com/2024/04/03/west-virginia-university/">West Virginia University, the main public university in its state, shuts down pure mathematics as a research topic</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/112215994119328438">\(\mathbb{M}\)</a>).</span> See also <em>Inside Higher Education</em>, “<a href="https://www.insidehighered.com/news/institutions/research-universities/2023/08/31/wvu-r-1-flagshipwithout-language-degrees-or-math">A Flagship Research University… Without Language Degrees or Math Ph.D.s?</a>” (<a href="https://archive.is/Xg22j">archive</a>) and <em>The New Yorker</em>, “<a href="https://www.newyorker.com/news/us-journal/an-academic-transformation-takes-on-the-math-department">An ‘academic transformation’ takes on the math department</a>”. I’m not in a mathematics department, but I completely agree with the sentiment of an operations researcher quoted by Peter Cameron in the comments of his post: “I could not hold up my head if I were in a university with no mathematics department”.</p>
</li>
<li>
<p>A very easy union bound in probability <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/112220860205946965">\(\mathbb{M}\)</a>)</span> states that, for a collection of independent events whose expected number (the sum of the probabilities of the events) is \(\mu\), the probability \(p\) of the union (that is, the probability of getting at least one event) obeys \(p\ge 1-e^{-\mu}\). When you expect to see many events, even if each one is individually unlikely, the probability of seeing at least one is high. This is a lower bound, unlike the more famous “union bound” \(p\le\mu\) (which doesn’t assume independence), and makes sense even when \(\mu\) is large. See e.g. <a href="https://math.stackexchange.com/questions/2634479/a-lower-bound-for-the-sum-of-probabilities-of-independent-events">this stackexchange post asking how to prove it</a>. I have a different question: what is this lower bound on the union called? I looked through <a href="https://en.wikipedia.org/wiki/Category:Probabilistic_inequalities">the probability inequalities listed by Wikipedia</a> and didn’t see it, but perhaps I missed something obvious.</p>
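<p>Since the events are independent, both sides can be computed exactly: the union has probability \(1-\prod_i(1-p_i)\), and the bound follows term by term from \(1-p\le e^{-p}\). A quick numeric check (illustrative):</p>

```python
import math

def union_lower_bound_holds(probs):
    """For independent events with probabilities `probs`, the union has
    probability 1 - prod(1 - p_i); since 1 - p <= exp(-p) termwise, this
    is at least 1 - exp(-mu) where mu = sum(p_i)."""
    p_union = 1.0 - math.prod(1.0 - p for p in probs)
    mu = sum(probs)
    return p_union >= 1.0 - math.exp(-mu) - 1e-12  # rounding tolerance
```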
</li>
<li>
<p><a href="https://mathstodon.xyz/@SusiS@pixelfed.de/112176747497292218">Edited view of the Salvador Dalí Museum spiral makes it even trippier</a>.</p>
</li>
<li>
<p><a href="https://www.quantamagazine.org/merging-fields-mathematicians-go-the-distance-on-old-problem-20240401/"><em>Quanta</em> on integer distances</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/112232626956795824">\(\mathbb{M}\)</a>).</span> The linked article highlights new research of Rachel Greenfeld, Sarah Peluse, and Marina Iliopoulou (<a href="https://arxiv.org/abs/2401.10821">arXiv:2401.10821</a>), on sets of planar points with integer distances. A paper of Solymosi shows that any such set has size at most linear in the diameter, achieved by evenly spaced points on the line. There also exist arbitrarily large cocircular integer-distance sets. The new preprint shows that everything else must be small: if you remove the largest point or circle from an integer-distance set, the number of points remaining is polylogarithmic in the diameter. Don’t be confused (as I initially was) by the restriction that the points lie in \([-N,N]^2\): they can have any real coordinate in that range, not just integers. The range limit \(N\) is just a stand-in for the diameter.</p>
<p>This is all motivated by the <a href="https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Anning_theorem">Erdős-Anning theorem</a>: if an integer-distance set does not lie entirely on one line, it must be finite. See also <a href="https://11011110.github.io/blog/2024/01/14/integer-distances-floppy.html">my own recent work extending the Erdős-Anning theorem and Solymosi’s diameter bounds in a different direction to various non-Euclidean but two-dimensional spaces</a>, <a href="https://arxiv.org/abs/2401.06328">arXiv:2401.06328</a>. A paragraph of the Greenfeld-Peluse-Iliopoulou preprint at the end of section 1.1, claiming that all prior work uses low degree algebraic geometry and because of that could not improve on Solymosi’s work, is somewhat inaccurate. My preprint is prior to theirs, does move beyond Solymosi (in a different direction), and explicitly avoids any algebraic geometry in favor of a topological analysis of Voronoi diagrams that works in other spaces.</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2404.04552">Fast and simple sorting using partial information (arXiv:2404.04552)</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/112238766104718448">\(\mathbb{M}\)</a>),</span> Bernhard Haeupler, Richard Hladík, John Iacono, Vaclav Rozhon, Robert Tarjan, and Jakub Tětek. Tarjan gave a distinguished seminar on this at my department recently. The result is that if you are given the outcomes of some comparisons on a totally ordered set, as a directed acyclic graph with \(n\) vertices, \(m\) edges, and \(T\) topological orders (linear extensions), you can complete the sort in time \(O(n+m+\log T)\), using only \(O(\log T)\) additional comparisons. The main ideas include:</p>
<ul>
<li>
<p>Combining a greedy topological ordering algorithm with a priority queue on available items to pick out the smallest available item at each step.</p>
</li>
<li>
<p>Using a priority queue with the “working set property”: the time to insert or delete an element \(x\) is logarithmic in the maximum number of other elements that were inserted after \(x\) and active at the same time as each other and \(x\).</p>
</li>
<li>
<p>Proving the working set property for <a href="https://en.wikipedia.org/wiki/Pairing_heap">pairing heaps</a>, a simple priority queue.</p>
</li>
<li>
<p>Using a clique decomposition of an interval graph, formed from the lifetimes of elements in the priority queue, to show that the working set bound on the priority queue almost matches the claimed bound on comparisons in terms of \(T\).</p>
</li>
<li>
<p>Some special case handling of input DAGs with long paths to fix up the cases where the two bounds don’t quite match.</p>
</li>
</ul>
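<p>The working-set machinery is the hard part, but the first idea, greedy topological ordering driven by a priority queue of available items, can be sketched in a few lines of Python. This is an illustrative sketch rather than the paper’s algorithm: it substitutes Python’s binary heap for a pairing heap with the working set property, so it makes \(O(m + n\log n)\) comparisons instead of the optimal \(O(\log T)\). The function name and the comparison-oracle interface are my own invention for the example.</p>

```python
import heapq

def sort_with_partial_info(n, edges, less_than):
    """Complete a sort of items 0..n-1 given known comparisons plus an oracle.

    edges: list of pairs (u, v) meaning "u < v is already known" (a DAG).
    less_than(a, b): oracle performing a new comparison, True iff a < b.
    Greedily emits, at each step, the smallest item all of whose known
    predecessors have already been emitted.
    """
    succ = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1

    class Key:  # wrapper so heapq orders items via the comparison oracle
        def __init__(self, x): self.x = x
        def __lt__(self, other): return less_than(self.x, other.x)

    heap = [Key(v) for v in range(n) if indeg[v] == 0]
    heapq.heapify(heap)
    order = []
    while heap:
        v = heapq.heappop(heap).x
        order.append(v)
        for w in succ[v]:       # v is placed; its successors may become available
            indeg[w] -= 1
            if indeg[w] == 0:
                heapq.heappush(heap, Key(w))
    return order
```

For example, with the hidden order being the usual order on integers, <code>sort_with_partial_info(6, [(0, 3), (2, 4)], lambda a, b: a &lt; b)</code> returns the fully sorted sequence, spending oracle comparisons only among the items that are simultaneously available.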
</li>
<li>
<p><a href="https://arxiv.org/abs/2403.17370">Formal verification of the empty hexagon number (arXiv:2403.17370)</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@highergeometer/112167562740222063">\(\mathbb{M}\)</a>),</span> Bernardo Subercaseaux, Wojciech Nawrocki, James Gallicchio, Cayden Codel, Mario Carneiro, and Marijn J. H. Heule. Heule and Scheucher proved earlier using a SAT solver that sets of 30 or more points in the plane in general position always have an empty hexagon; this is tight as sets of 29 points with no empty hexagon are also known. This new preprint uses Lean to prove the “by hand” part, but the authors write that more needs to be done to connect formal provers to the automatic parts of SAT solver results.</p>
</li>
<li>
<p><a href="https://awards.acm.org/award-recipients/wigderson_3844537">Avi Wigderson is this year’s Turing Award winner</a> <span style="white-space:nowrap">(<a href="https://fediscience.org/@fortnow/112247200796712931">\(\mathbb{M}\)</a>,</span> <a href="https://blog.computationalcomplexity.org/2024/04/avi-wins-turing-award.html">see also</a>).</p>
</li>
<li>
<p>Two more newly-promoted Good Articles on Wikipedia <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/112253912393955221">\(\mathbb{M}\)</a>):</span></p>
<ul>
<li>
<p><a href="https://en.wikipedia.org/wiki/Random_binary_tree">Random binary tree</a> – not just binary search trees produced through random insertion orders (although those are a central topic) but also uniformly random trees, trees produced through stochastic branching processes, and digital search trees for random binary numbers. These different distributions give different properties: for instance, random-insertion trees have logarithmic depth, but for uniformly random trees the expected depth is proportional to the square root of the number of nodes. For certain branching processes, even the expected number of nodes is unbounded.</p>
</li>
<li>
<p><a href="https://en.wikipedia.org/wiki/Earth%E2%80%93Moon_problem">Earth–Moon problem</a> – how many colors would you need to color a political map in a future in which each Earth nation has a contiguous Moon colony? Unlike the situation with the four-color theorem, the answer remains unknown. Beyond sci-fi cartography, there are some applications in printed-circuit board testing.</p>
</li>
</ul>
</li>
<li>
<p><a href="https://news.artnet.com/art-world/florence-gallerie-dellaccademia-wins-david-lawsuit-2313262">The Italian court system continues its attack on the public domain</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/112265657744993998">\(\mathbb{M}\)</a>,</span> <a href="https://news.ycombinator.com/item?id=40023471">via</a>, <a href="https://www.latimes.com/world-nation/story/2024-04-01/a-fight-to-protect-the-dignity-of-michelangelos-david-raises-questions-about-freedom-of-expression">see also</a>), ruling that the Florence gallery that owns Michelangelo’s David also owns “image rights” to the statue, can require licensing fees for any image depicting it, and can impose additional fines for presenting it in a distorted way. It is perhaps important to note that the statue, created in 1504, was on outdoor exhibit in a public square in Florence until 1873, and a replica is still visible there.</p>
</li>
<li>
<p>Somehow I ended up with the following two images adjacent in my Mastodon favorites list <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/112272173752385277">\(\mathbb{M}\)</a>):</span> <a href="https://mathstodon.xyz/@foldworks/112240192935752642">the ceiling of a museum in Uzbekistan, intricately decorated by two sizes of 60°-90°-120°-90° kites</a>, and <a href="https://mastodon.gamedev.place/@curved_ruler/112235975539402635">a tiling by approximate 60°-120° diamonds, distorted to reduce in size as they approach a central limit</a>. Both remind me of my old work on <a href="/blog/2012/07/23/diamonds-kites-and.html">diamond-kite meshes</a>, which combine these two shapes to allow the mesh to vary in scale without distortion. See also <a href="https://mathstodon.xyz/@ngons/112279033733852966">another image inspired by this post</a>.</p>
</li>
<li>
<p><a href="https://www.chronicle.com/article/one-scientist-didnt-turn-in-his-grant-reports-now-federal-agencies-are-withholding-grants-for-an-entire-university">US government holds back new grants to an entire university</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/112278926930668605">\(\mathbb{M}\)</a>,</span> <a href="https://archive.is/WXtQa">archived</a>), the University of California, San Diego, because some guy retired in 2022 without finishing his final grant reports. The story says he’s doing them now, but what would have happened if he got hit by a bus?</p>
</li>
</ul>
<p class="authors">By David Eppstein</p>
</body></html>
David Eppstein (https://11011110.github.io/blog/)
Richard Lipton: Guggenheim 2024 (https://rjlipton.com/?p=24361, 2024-04-15T16:30:46+00:00)
<html><body>
<p>Edward Hirsch is the President of the John Simon Guggenheim Memorial Foundation and has led it since 2003. He just <a href="https://www.gf.org/news/foundation-news/announcing-the-2024-guggenheim-fellows/">announced</a> the award of 188 Guggenheim Fellowships to a distinguished and diverse group of culture-creators working across 52 disciplines. Chosen from a pool of almost 3,000 applicants, the Class of 2024 Guggenheim Fellows was tapped on the basis of prior career achievement and exceptional promise. As established in 1925 by founder Senator Simon Guggenheim, each fellow receives a monetary stipend to pursue independent work at the highest level under “the freest possible conditions.”</p>
<p><a href="https://rjlipton.com/2024/04/15/guggenheim-2024/eh/" rel="attachment wp-att-24369"><img fetchpriority="high" decoding="async" data-attachment-id="24369" data-permalink="https://rjlipton.com/2024/04/15/guggenheim-2024/eh/" data-orig-file="https://i0.wp.com/rjlipton.com/wp-content/uploads/2024/04/eh.jpeg?fit=224%2C224&ssl=1" data-orig-size="224,224" data-comments-opened="1" data-image-meta='{"aperture":"0","credit":"","camera":"","caption":"","created_timestamp":"0","copyright":"","focal_length":"0","iso":"0","shutter_speed":"0","title":"","orientation":"0"}' data-image-title="eh" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/rjlipton.com/wp-content/uploads/2024/04/eh.jpeg?fit=224%2C224&ssl=1" data-large-file="https://i0.wp.com/rjlipton.com/wp-content/uploads/2024/04/eh.jpeg?fit=224%2C224&ssl=1" src="https://i0.wp.com/rjlipton.com/wp-content/uploads/2024/04/eh.jpeg?resize=224%2C224&ssl=1" alt="" width="224" height="224" class="aligncenter size-full wp-image-24369" srcset="https://i0.wp.com/rjlipton.com/wp-content/uploads/2024/04/eh.jpeg?w=224&ssl=1 224w, https://i0.wp.com/rjlipton.com/wp-content/uploads/2024/04/eh.jpeg?resize=150%2C150&ssl=1 150w, https://i0.wp.com/rjlipton.com/wp-content/uploads/2024/04/eh.jpeg?resize=200%2C200&ssl=1 200w" sizes="(max-width: 224px) 100vw, 224px" data-recalc-dims="1"></a></p>
<p>Edward Hirsch is himself a poet, which shows how large the depth and breadth of the Guggenheim Fellowships is. See <a href="https://poets.org/poem/task">here</a>.</p>
<p>The 188 new Fellows come from 84 different academic institutions; 41 states, the District of Columbia, and four Canadian provinces are represented in this year’s class of Fellows, who range in age from 28 to 89. Many Fellows’ projects engage with issues like disability activism, climate change, identity, incarceration, AI, and democracy.</p>
<p><strong> Computer Science</strong></p>
<p>Gene Tsudik is the 2024 <a href="https://news.uci.edu/2024/04/11/wang-feng-and-gene-tsudik-are-named-2024-guggenheim-fellows/">representative</a> for computer science. He is a Distinguished Professor of Computer Science at the University of California, Irvine (UCI). He obtained his PhD in Computer Science from USC in 1991. Before coming to UCI in 2000, he was at IBM Zurich Research Laboratory (1991-1996) and USC/ISI (1996-2000). His research interests include many topics in security, privacy and applied cryptography. </p>
<p><a href="https://rjlipton.com/2024/04/15/guggenheim-2024/gt2/" rel="attachment wp-att-24379"><img decoding="async" data-attachment-id="24379" data-permalink="https://rjlipton.com/2024/04/15/guggenheim-2024/gt2/" data-orig-file="https://i0.wp.com/rjlipton.com/wp-content/uploads/2024/04/gt2.jpeg?fit=225%2C225&ssl=1" data-orig-size="225,225" data-comments-opened="1" data-image-meta='{"aperture":"0","credit":"","camera":"","caption":"","created_timestamp":"0","copyright":"","focal_length":"0","iso":"0","shutter_speed":"0","title":"","orientation":"0"}' data-image-title="gt2" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/rjlipton.com/wp-content/uploads/2024/04/gt2.jpeg?fit=225%2C225&ssl=1" data-large-file="https://i0.wp.com/rjlipton.com/wp-content/uploads/2024/04/gt2.jpeg?fit=225%2C225&ssl=1" src="https://i0.wp.com/rjlipton.com/wp-content/uploads/2024/04/gt2.jpeg?resize=225%2C225&ssl=1" alt="" width="225" height="225" class="aligncenter size-full wp-image-24379" srcset="https://i0.wp.com/rjlipton.com/wp-content/uploads/2024/04/gt2.jpeg?w=225&ssl=1 225w, https://i0.wp.com/rjlipton.com/wp-content/uploads/2024/04/gt2.jpeg?resize=150%2C150&ssl=1 150w, https://i0.wp.com/rjlipton.com/wp-content/uploads/2024/04/gt2.jpeg?resize=200%2C200&ssl=1 200w" sizes="(max-width: 225px) 100vw, 225px" data-recalc-dims="1"></a></p>
<p>Tsudik is the only computer scientist to be awarded a Guggenheim Fellowship this year, and he intends to use his fellowship funding to bootstrap a new line of research on building IoT devices resilient against the devastating large-scale malware infestations that have become all too common in recent years. See his <a href="https://arxiv.org/pdf/2309.03574.pdf">recent paper</a> with Sashidhar Jakkamsetti and Youngil Kim.</p>
<p>He is also the author of the first crypto-poem published as a refereed <a href="https://ics.uci.edu/~gts/scn-2016-crypto-poetry.pdf">paper</a>. Perhaps this is why Edward Hirsch helped select him? Perhaps not? Tsudik also suffers from two incurable academic diseases: “Research ADHD” and “Munchausen-without-Proxy.” Congrats to him as a winner of a Guggenheim.</p>
<p><strong>Open Problems</strong></p>
<p>I enjoyed some of his observations about computing. For example:</p>
<blockquote><p>
<em><br>
There are two equivalence classes of idiots:<br>
Those who yearn for the past that never was.<br>
AND<br>
Those who dream of the future that never will be.</em>
</p></blockquote>
<p>Congrats again.</p>
<p class="authors">By rjlipton</p>
</body></html>
Richard Lipton (https://rjlipton.wpcomstaging.com)
Ben Recht: Unreal Is Here (https://www.argmin.net/p/unreal-is-here, 2024-04-15T14:02:41+00:00)
<html><body>
<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4f95475-cdb8-49bc-a813-851cfdf6f320_1100x219.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset">
<picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4f95475-cdb8-49bc-a813-851cfdf6f320_1100x219.jpeg 424w, https://substackcdn.com/image/fetch/w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4f95475-cdb8-49bc-a813-851cfdf6f320_1100x219.jpeg 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4f95475-cdb8-49bc-a813-851cfdf6f320_1100x219.jpeg 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4f95475-cdb8-49bc-a813-851cfdf6f320_1100x219.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4f95475-cdb8-49bc-a813-851cfdf6f320_1100x219.jpeg" width="1100" height="219" data-attrs='{"src":"https://substack-post-media.s3.amazonaws.com/public/images/e4f95475-cdb8-49bc-a813-851cfdf6f320_1100x219.jpeg","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":219,"width":1100,"resizeWidth":null,"bytes":329826,"alt":null,"title":null,"type":"image/jpeg","href":null,"belowTheFold":false,"topImage":true,"internalRedirect":null}' class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4f95475-cdb8-49bc-a813-851cfdf6f320_1100x219.jpeg 424w, 
https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4f95475-cdb8-49bc-a813-851cfdf6f320_1100x219.jpeg 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4f95475-cdb8-49bc-a813-851cfdf6f320_1100x219.jpeg 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4f95475-cdb8-49bc-a813-851cfdf6f320_1100x219.jpeg 1456w" sizes="100vw" fetchpriority="high"></source></picture><div></div>
</div></a></figure></div>
<p>One of the more offensive things about the new AI audio generators is their outputs sound like the schlockiest imaginable demo tracks advertising sample packs. In talking to some of my friends, I realized that many people aren’t aware of how far these sample packs have come. Which means I could probably raise VC funding claiming a simple search engine for the Native Instruments webpage is AI.</p>
<p><a href="https://www.native-instruments.com/en/">Native Instruments</a> is a German music software company that has been a dominant force in sound generators for decades. Probably their most famous software is <a href="https://www.native-instruments.com/en/products/komplete/synths/massive/">Massive</a>, which brought us all of the obnoxious dubstep wobble drops of the 2010s. <a href="https://www.youtube.com/watch?v=l4ciFmOCqIE">Remember Skrillex</a>? Good times.</p>
<p>In any event, NI is a massive clearinghouse of sample packs and sound generating tools. You should go listen to the demos to be floored by what you can make with a laptop. Each of their product webpages embeds a dozen demo songs showcasing the different moods you can evoke with each sample pack. For instance, check out their newest pack, “<a href="https://www.native-instruments.com/en/products/komplete/cinematic/action-woodwinds/">Action Woodwinds.</a>” Here’s a movie soundtrack made entirely from samples:</p>
<div class="soundcloud-wrap" data-attrs='{"url":"https://api.soundcloud.com/tracks/1774212441","title":"Stealing The Spaceship (Benny Oschmann) - ACTION WOODWINDS - Demo by SONUSCORE","description":"Action Woodwinds provides the lively sound of a full woodwind ensemble, coupled with unparalleled creative flexibility. Craft thrilling moments with an extensive collection of innovative orchestral phrases, each brimming with drama, tension, and impact. The intuitive engine streamlines your process, whether you’re building blockbuster cues, fantasy soundtracks, or action-packed trailers.\n\nLearn more here: https://www.sonuscore.com/shop/action-woodwinds/","thumbnail_url":"https://i1.sndcdn.com/artworks-NL2prjwuh1vd8U1m-CpfjAw-t500x500.jpg","author_name":"SONUSCORE","author_url":"https://soundcloud.com/sonuscore","targetUrl":"https://soundcloud.com/sonuscore/stealing-the-spaceship-benny"}' data-component-name="SoundcloudToDOM"><iframe src="https://w.soundcloud.com/player/?auto_play=false&buying=false&liking=false&download=false&sharing=false&show_artwork=true&show_comments=false&show_playcount=false&show_user=true&hide_related=true&visual=false&start_track=0&url=https%3A%2F%2Fapi.soundcloud.com%2Ftracks%2F1774212441" frameborder="0" gesture="media" scrolling="no" allowfullscreen="true"></iframe></div>
<p>And you don’t have to work for the company to generate soundtracks like this. Here’s a project by an unaffiliated film composer, <a href="https://www.imdb.com/name/nm10256380/">Ashraf Elziftawi</a>, who combines multiple symphonic elements in his scores:</p>
<div class="soundcloud-wrap" data-attrs='{"url":"https://api.soundcloud.com/tracks/245579497","title":"Mountain Of Gods (NI Strings/Brass Symphony Series Demo) by Ashraf Elziftawi - أشرف الزفتاوى","description":"A demo i put together showing how i would use the new native instruments strings/Brass symphony series within my style of writing.\n\nHere is the video for the session https://www.youtube.com/watch?v=g-qWsAQKJXQ","thumbnail_url":"https://i1.sndcdn.com/artworks-000145984556-qjk072-t500x500.jpg","author_name":"Ashraf Elziftawi - أشرف الزفتاوى","author_url":"https://soundcloud.com/ash-ebrahim","targetUrl":"https://soundcloud.com/ash-ebrahim/mountain-of-gods-ni-stringsbrass-symphony-series-demo"}' data-component-name="SoundcloudToDOM"><iframe src="https://w.soundcloud.com/player/?auto_play=false&buying=false&liking=false&download=false&sharing=false&show_artwork=true&show_comments=false&show_playcount=false&show_user=true&hide_related=true&visual=false&start_track=0&url=https%3A%2F%2Fapi.soundcloud.com%2Ftracks%2F245579497" frameborder="0" gesture="media" scrolling="no" allowfullscreen="true"></iframe></div>
<p>All of this is assembled in software, using Apple’s Logic Pro audio production software (200 dollars) and Native Instrument’s Kontakt (100 dollars). <a href="https://www.native-instruments.com/en/products/komplete/samplers/kontakt-7/">Kontakt</a> is what I used to call a “sampler,” but it’s so much more sophisticated than the clunky device I owned in the 1990s. NI now more fittingly calls Kontakt a “virtual instrument platform.” Every sound is deeply editable and configurable, and once you get a few virtual instruments together, you can create unique, compelling songs. If you pay NI 500 additional dollars, <a href="https://www.native-instruments.com/en/products/komplete/bundles/komplete-14-standard/">you get dozens of sample packs and synthesizers on top of Kontakt.</a></p>
<p>Elziftawi has some YouTube videos demonstrating his workflow:</p>
<div id="youtube2-g-qWsAQKJXQ" class="youtube-wrap" data-attrs='{"videoId":"g-qWsAQKJXQ","startTime":null,"endTime":null}' data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/g-qWsAQKJXQ?rel=0&autoplay=0&showinfo=0&enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div>
<p>Oh, those voices? Fake! Err, I mean, virtual! Here is a track of just vocal samples from the “<a href="https://www.native-instruments.com/en/products/nks-partners/soundiron/voices-of-rapture/">Voices of Rapture</a>” pack. </p>
<div class="soundcloud-wrap" data-attrs="{"url":"https://api.soundcloud.com/tracks/312399051","title":"Dominik Spychalski - Six (Naked) - Soundiron Voices Of Rapture by SOUNDIRON","description":"Soundiron Voices of Rapture\nhttps://soundiron.com/products/voices-of-rapture\n\nVoices of Rapture is a single combined Kontakt Player library with the soprano, alto, tenor and bass all together, and with some new content and features. It's easier to use, more streamlined and refined all around.","thumbnail_url":"https://i1.sndcdn.com/artworks-000212563294-m2pbrh-t500x500.jpg","author_name":"SOUNDIRON","author_url":"https://soundcloud.com/soundiron","targetUrl":"https://soundcloud.com/soundiron/dominik-spychalski-six-naked-soundiron-voices-of-rapture"}" data-component-name="SoundcloudToDOM"><iframe src="https://w.soundcloud.com/player/?auto_play=false&buying=false&liking=false&download=false&sharing=false&show_artwork=true&show_comments=false&show_playcount=false&show_user=true&hide_related=true&visual=false&start_track=0&url=https%3A%2F%2Fapi.soundcloud.com%2Ftracks%2F312399051" frameborder="0" gesture="media" scrolling="no" allowfullscreen="true"></iframe></div>
<p>Not surprisingly, you can do many other instruments too. Here are some guitars, all sampled and resynthesized.</p>
<div class="soundcloud-wrap" data-attrs='{"url":"https://api.soundcloud.com/tracks/403530726","title":"KOMPLETE > SESSION GUITARIST - ELECTRIC SUNBURST - Trip To Miami Demo by NativeInstruments","description":"Four demos from ELECTRIC SUNBURST, a new release from NATIVE INSTRUMENTS which precisely captures a timeless, playable electric guitar, ideal for modern productions.\n\nnative-instruments.com/electricsunburst","thumbnail_url":"https://i1.sndcdn.com/artworks-000306421788-9uqtht-t500x500.jpg","author_name":"NativeInstruments","author_url":"https://soundcloud.com/nativeinstruments","targetUrl":"https://soundcloud.com/nativeinstruments/komplete-session-guitarist"}' data-component-name="SoundcloudToDOM"><iframe src="https://w.soundcloud.com/player/?auto_play=false&buying=false&liking=false&download=false&sharing=false&show_artwork=true&show_comments=false&show_playcount=false&show_user=true&hide_related=true&visual=false&start_track=0&url=https%3A%2F%2Fapi.soundcloud.com%2Ftracks%2F403530726" frameborder="0" gesture="media" scrolling="no" allowfullscreen="true"></iframe></div>
<p>If that’s not your vibe, what about a demo of someone using only samples to create a Mogwai song?</p>
<div class="soundcloud-wrap" data-attrs='{"url":"https://api.soundcloud.com/tracks/1397271313","title":"ModeAudio - Lowlands by Ableton","description":"Immerse yourself in the hypnotic and experimental realm of post-rock with this rich palette of multisampled instruments, sounds and loops that capture the genre’s distinctive textures and tones.\n\nAvailable now on the Ableton Pack Store: https://www.ableton.com/en/packs/post-rock/","thumbnail_url":"https://i1.sndcdn.com/artworks-GC4KubeN2RkRev3p-osKodw-t500x500.jpg","author_name":"Ableton","author_url":"https://soundcloud.com/ableton","targetUrl":"https://soundcloud.com/ableton/modeaudio-lowlands-master"}' data-component-name="SoundcloudToDOM"><iframe src="https://w.soundcloud.com/player/?auto_play=false&buying=false&liking=false&download=false&sharing=false&show_artwork=true&show_comments=false&show_playcount=false&show_user=true&hide_related=true&visual=false&start_track=0&url=https%3A%2F%2Fapi.soundcloud.com%2Ftracks%2F1397271313" frameborder="0" gesture="media" scrolling="no" allowfullscreen="true"></iframe></div>
<p>No instruments were touched in the making of this production. I mean, compare that one to an actual Mogwai song. </p>
<iframe class="spotify-wrap" data-attrs='{"image":"https://i.scdn.co/image/ab67616d0000b2735df11c39821795f5fe971627","title":"Folk Death 95","subtitle":"Mogwai","description":"","url":"https://open.spotify.com/track/5DuNDqbO0cvlNA7bbA381X","belowTheFold":true,"noScroll":false}' src="https://open.spotify.com/embed/track/5DuNDqbO0cvlNA7bbA381X" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" loading="lazy" data-component-name="Spotify2ToDOM"></iframe><p>It’s pretty close!</p>
<p>Maybe you prefer strummed acoustics?</p>
<div class="soundcloud-wrap" data-attrs='{"url":"https://api.soundcloud.com/playlists/275959779","title":"KOMPLETE > STRUMMED ACOUSTIC 2 by NativeInstruments","description":"The Session Guitarist series returns with STRUMMED ACOUSTIC 2 putting the pure sound of two vintage acoustic guitars at your fingertips. The iconic 6- and 12-string guitars were played by skilled session musicians and captured in painstaking detail by our friends at drumasonic (http://www.drumasonic.com/). The result is a versatile, superbly playable Kontakt instrument that lends itself perfectly to almost any production task. Check out these demo tracks to hear the range and the realism.\n \nFor more: www.native-instruments.com/strummedacoustic2","thumbnail_url":"https://i1.sndcdn.com/artworks-000193829887-hqv3qh-t500x500.jpg","author_name":"NativeInstruments","author_url":"https://soundcloud.com/nativeinstruments","targetUrl":"https://soundcloud.com/nativeinstruments/sets/komplete-strummed-acoustic-2"}' data-component-name="SoundcloudToDOM"><iframe src="https://w.soundcloud.com/player/?auto_play=false&buying=false&liking=false&download=false&sharing=false&show_artwork=true&show_comments=false&show_playcount=false&show_user=true&hide_related=true&visual=false&start_track=0&url=https%3A%2F%2Fapi.soundcloud.com%2Fplaylists%2F275959779" frameborder="0" gesture="media" scrolling="no" allowfullscreen="true"></iframe></div>
<p><a href="http://www.orangetreesamples.com">Banjos</a>, you say?</p>
<div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs='{"label":null,"mediaUploadId":"75d9288e-490a-40f8-bd49-89a4dbf6710e","duration":107.52,"isEditorNode":true}'></div>
<p>What about Guzheng?</p>
<div class="soundcloud-wrap" data-attrs='{"url":"https://api.soundcloud.com/tracks/2631805","title":"Soniccouture Guzheng - Buried In Ice (Featuring EBow guitar) by Ableton","description":"Featuring EBow guitar","thumbnail_url":"https://soundcloud.com/images/fb_placeholder.png","author_name":"Ableton","author_url":"https://soundcloud.com/ableton","targetUrl":"https://soundcloud.com/ableton/soniccouture-guzheng-buried-in-ice?in=ableton/sets"}' data-component-name="SoundcloudToDOM"><iframe src="https://w.soundcloud.com/player/?auto_play=false&buying=false&liking=false&download=false&sharing=false&show_artwork=true&show_comments=false&show_playcount=false&show_user=true&hide_related=true&visual=false&start_track=0&url=https%3A%2F%2Fapi.soundcloud.com%2Ftracks%2F2631805" frameborder="0" gesture="media" scrolling="no" allowfullscreen="true"></iframe></div>
<p>You can browse more example sounds on <a href="https://www.native-instruments.com/en/catalog/software/">Native Instruments’ webpage</a> or <a href="https://www.ableton.com/en/packs/">Ableton’s Packs page</a>. There are companies that will even sell you subscription services for samples, like <a href="https://splice.com/sounds/packs/sample-magic/shoegaze/samples">Splice</a>. Once you get a sense of the sort of possibilities that are out there, you start to realize that if you want to mimic a style, there’s a sample pack waiting for you. And if you want to merge a couple of different styles, that will just need two sample packs. Any sound you want for whatever project you are working on is within your reach. It will only take a little practice. And maybe a little money.</p>
<p>Companies like Native Instruments have streamlined creatives’ access. They have built a future where anyone can make great music. Whether you're a shower singer or a charting artist, companies like Native Instruments have broken barriers between the musician and the songs they dream of making. No instruments needed, just imagination, a computer, and a checkbook. It’s from your mind to music. </p>
<p>Did I plagiarize that last paragraph? <a href="https://suno.com/about">You bet I did</a>.</p>
<p>Because Native Instruments must be a 100 billion dollar company, right? They make it easy for the common person to generate all sorts of soundscapes. Well, actually, they were consumed by a private equity firm in 2021, merged with other music software companies, and subsequently downsized. But unicorn AI firms can still scrape their SoundCloud page. I guess that’s something.</p>
<p class="button-wrapper" data-attrs='{"url":"https://www.argmin.net/subscribe?","text":"Subscribe now","action":null,"class":null}' data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.argmin.net/subscribe?"><span>Subscribe now</span></a></p>
<p class="authors">By Ben Recht</p>
</body></html>
Ben Recht (http://benjamin-recht.github.io/)