Theory of Computing Report

Wednesday, November 20

For what d is the following true: every 2-coloring of \(R^d\) has a mono unit square (Answering(?) the Question)

from Computational Complexity

 In my last post (see here) I invited you to work on the following question:

Find a \(d\) such that

--There is a 2-coloring of \(R^d\) with no mono unit square.

--For all 2-colorings of \(R^{d+1}\) there is a mono unit square. 

Actually I should have phrased my question as: what do we know about \(d\)?

Here is what we know:

a) \(d \ge 2\). There is a 2-coloring of \(R^2\) with no mono unit square. This is easy and I leave it to you (one candidate coloring is sketched just after this list).

b) \(d\le 5\). For all 2-colorings of \(R^6\) there is a mono unit square. I will give pointers to the relevant papers and to my slides later in this post.

c) \(d\le 4\). For all 2-colorings of \(R^5\) there is a mono unit square. This follows from an observation about the proof for \(R^6\), and it will be in the slides about \(R^6\).

d) \(d\le 3\). This is in a paper that the reader Dom emailed me a pointer to. Dom is better at Google Search than I am. The link is here.
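Here is one candidate coloring for item a), together with a randomized sanity check. This is my own sketch (the post leaves the construction as an exercise): color a point by the parity of \(\lfloor x \rfloor\). An axis-aligned unit square has x-coordinates \(a\) and \(a+1\), which land in strips of opposite parity, while a tilted unit square has x-coordinates \(m\), \(m+|\sin\theta|\), \(m+|\cos\theta|\), \(m+|\sin\theta|+|\cos\theta|\) with \(|\sin\theta|,|\cos\theta|\le 1\), so its vertices cannot all land in strips of the same parity either.

```python
# Sketch of the stripe coloring chi(x, y) = floor(x) mod 2 for item a).
# The loop below is a randomized sanity check, not a proof.
import math
import random

def color(x: float, y: float) -> int:
    return math.floor(x) % 2

def random_unit_square():
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    t = random.uniform(0, math.tau)
    dx, dy = math.cos(t), math.sin(t)   # one side; the other side is (-dy, dx)
    return [(x, y), (x + dx, y + dy), (x + dx - dy, y + dy + dx), (x - dy, y + dx)]

for _ in range(100_000):
    square = random_unit_square()
    assert len({color(px, py) for px, py in square}) == 2, "monochromatic unit square!"
print("no monochromatic unit square in 100,000 random trials")
```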

MY SLIDES:

\(K_6\) is the complete graph on 6 vertices. We will be looking at 2-colorings of its edges.

\(C_4\) is the cycle on 4 vertices. A mono \(C_4\) has all four edges the same color.

We need a result by Chvátal and Harary in this paper here.

Lemma: For all 2-colorings of the edges of \(K_6\) there is a mono \(C_4\).

The proof appears both in their paper, here, and on slides I wrote here.
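Since the lemma is about a finite object, it can also be verified by exhaustive search: there are only \(2^{15}\) 2-colorings of the 15 edges of \(K_6\). A brute-force check (my own sketch, independent of the slides):

```python
# Exhaustively verify: every 2-coloring of the 15 edges of K_6 contains
# a monochromatic C_4. Runs over all 2^15 = 32768 colorings in seconds.
from itertools import combinations

edges = list(combinations(range(6), 2))          # the 15 edges of K_6
index = {e: i for i, e in enumerate(edges)}

def edge_color(coloring: int, u: int, v: int) -> int:
    return (coloring >> index[(min(u, v), max(u, v))]) & 1

def has_mono_c4(coloring: int) -> bool:
    for a, b, c, d in combinations(range(6), 4):
        # the three distinct 4-cycles on the vertex set {a, b, c, d}
        for w, x, y, z in ((a, b, c, d), (a, b, d, c), (a, c, b, d)):
            colors = {edge_color(coloring, w, x), edge_color(coloring, x, y),
                      edge_color(coloring, y, z), edge_color(coloring, z, w)}
            if len(colors) == 1:
                return True
    return False

assert all(has_mono_c4(col) for col in range(1 << 15))
print("every 2-coloring of E(K_6) has a monochromatic C_4")
```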

Stefan Burr used this to prove the following theorem.

Thm: For all 2-colorings of \(R^6\) there is a mono unit square. 

The proof appears (with credit given to Stefan Burr) in a paper by Erdős, Graham, Montgomery, Rothschild, Spencer, and Straus, here, and on slides I wrote here.
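The step from the lemma to the theorem can be sanity-checked numerically. My reconstruction of the embedding (see the slides and the paper for the authoritative version): map each edge \(\{i,j\}\) of \(K_6\) to the point \(p_{ij} = (e_i + e_j)/\sqrt{2} \in R^6\) and color edge \(\{i,j\}\) with the color of \(p_{ij}\). The four points coming from any 4-cycle of \(K_6\) form a unit square (unit sides, equal diagonals of length \(\sqrt{2}\), and a parallelogram since \(p_{ab}+p_{cd}=p_{bc}+p_{da}\)), so a mono \(C_4\) yields a mono unit square.

```python
# Numeric check that every 4-cycle of K_6 embeds as a unit square under
# p_ij = (e_i + e_j) / sqrt(2). (A rhombus with equal diagonals is a square.)
from itertools import combinations, permutations
import numpy as np

e = np.eye(6)
p = {(i, j): (e[i] + e[j]) / np.sqrt(2) for i, j in combinations(range(6), 2)}

def pt(u, v):
    return p[(min(u, v), max(u, v))]

for a, b, c, d in permutations(range(6), 4):
    q = [pt(a, b), pt(b, c), pt(c, d), pt(d, a)]
    sides = [np.linalg.norm(q[i] - q[(i + 1) % 4]) for i in range(4)]
    diagonals = [np.linalg.norm(q[0] - q[2]), np.linalg.norm(q[1] - q[3])]
    assert np.allclose(sides, 1.0) and np.allclose(diagonals, np.sqrt(2))
print("every 4-cycle of K_6 gives four points forming a unit square")
```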

Random Points

1) It is open what happens in \(R^3\). 

2) The proof for \(R^6\) uses very little geometry. Dom had a proof for \(R^6\) in a comment on my last post that used geometry. The proof for \(R^4\) uses geometry. 

3) An ill-defined open question: find a proof that every 2-coloring of \(R^4\) has a mono unit square that does not use that much geometry, so that I can more easily make slides about it.



By gasarch


TR24-184 | Coherence in Property Testing: Quantum-Classical Collapses and Separations | Fernando Jeronimo, Nir Magrafta, Joseph Slote, Pei Wu

from ECCC Papers

Understanding the power and limitations of classical and quantum information, and how they differ, is an important endeavor. On the classical side, property testing of distributions is a fundamental task: a tester, given samples of a distribution over a typically large domain such as $\{0,1\}^n$, is asked to verify properties of the distribution. A key property of interest in this paper is the support size, both of distributions, a central problem classically [Valiant and Valiant STOC'11], as well as of quantum states. Classically, even given $2^{n/16}$ samples, no tester can distinguish between distributions of support size $2^{n/8}$ from $2^{n/4}$ with probability better than $2^{-\Theta(n)}$, even with the promise that they are flat distributions. In the quantum setting, quantum states can be in a coherent superposition of many states of $\{0,1\}^n$, providing a global description of probability distributions. One may ask if coherence can enhance property testing. A natural way to encode a flat distribution is via the subset states, $|\phi_S \rangle = 1/\sqrt{| S |} \sum_{i \in S} |i\rangle$. We show that coherence alone is not enough to improve the testability of support size. (1) Coherence limitations. Given $2^{n/16}$ copies, no tester can distinguish between subset states of size $2^{n/8}$ from $2^{n/4}$ with probability better than $2^{-\Theta(n)}$. Our result is more general and establishes the indistinguishability between subset states and Haar random states, leading to new constructions of pseudorandom and pseudoentangled states and resolving an open problem of [Ji, Liu and Song, CRYPTO'18]. The hardness persists even when allowing multiple public-coin AM provers for a classical tester. (2) Classical hardness with provers. Given $2^{O(n)}$ samples from a classical distribution and $2^{O(n)}$ communication with multiple independent AM provers, no classical tester can estimate the support size up to factors $2^{\Omega(n)}$ with probability better than $2^{-\Theta(n)}$. Our hardness result is tight. In contrast, coherent subset state proofs suffice to improve testability exponentially: (3) Quantum advantage with certificates. With polynomially many copies and subset state proofs, a tester can approximate the support size of a subset state of arbitrary size. Some structural assumption on the quantum proofs is required, since we show that (4) Collapse of QMA. A general proof cannot information-theoretically improve the testability of any quantum property whatsoever. Our results highlight both the power and limitations of coherence in property testing, establishing exponential quantum-classical separations across various parameters. We also show several connections and implications of the study of property testing, in particular in establishing quantum-to-quantum state transformation lower bounds and disentangler lower bounds.
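For concreteness, a subset state is easy to write down numerically. A minimal numpy sketch of $|\phi_S\rangle$ (my illustration, not code from the paper); measuring the state in the computational basis samples uniformly from $S$, which is exactly the flat distribution it encodes.

```python
# Build the subset state |phi_S> = |S|^{-1/2} sum_{i in S} |i> as a dense
# vector, for n = 8 and |S| = 2^{n/2}.
import numpy as np

n = 8
rng = np.random.default_rng(0)
S = rng.choice(2**n, size=2**(n // 2), replace=False)  # random subset of {0,1}^n

phi = np.zeros(2**n)
phi[S] = 1 / np.sqrt(len(S))                           # uniform amplitudes on S

assert np.isclose(np.linalg.norm(phi), 1.0)            # a valid quantum state
print("support size:", np.count_nonzero(phi))
```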


Postdoc at Institute of Science and Technology Austria (apply by December 31, 2024)

from CCI: jobs


The algorithms group at the Institute of Science and Technology Austria (ISTA) is offering postdoctoral positions in combinatorial algorithms, especially graph algorithms and differential privacy. Please send your CV, research statement, and 2–3 recommendation letters.

Website: https://ista.ac.at/en/job/postdoc-research-group-monika-henzinger/
Email: monika.henzinger@ist.ac.at

By shacharlovett

TR24-183 | Improved PIR Schemes using Matching Vectors and Derivatives | Fatemeh Ghasemi, Swastik Kopparty, Madhu Sudan

from ECCC Papers

In this paper, we construct new t-server Private Information Retrieval (PIR) schemes with communication complexity subpolynomial in the previously best known, for all but finitely many t. Our results are based on combining derivatives (in the spirit of Woodruff-Yekhanin) with the Matching Vector based PIRs of Yekhanin and Efremenko. Previously such a combination was achieved in an ingenious way by Dvir and Gopi, using polynomials and derivatives over certain exotic rings, en route to their fundamental result giving the first 2-server PIR with subpolynomial communication. Our improved PIRs are based on two ingredients: - We develop a new and direct approach to combine derivatives with Matching Vector based PIRs. This approach is much simpler than that of Dvir-Gopi: it works over the same field as the original PIRs, and only uses elementary properties of polynomials and derivatives. - A key subproblem that arises in the above approach is a higher-order polynomial interpolation problem. We show how “sparse S-decoding polynomials”, a powerful tool from the original constructions of Matching Vector PIRs, can be used to solve this higher-order polynomial interpolation problem using surprisingly few higher-order evaluations. Using the known sparse S-decoding polynomials in combination with our ideas leads to our improved PIRs. Notably, we get a 3-server PIR scheme with communication $2^{O^{\sim}((\log n)^{1/3})}$, improving upon the previously best known communication of $2^{O^{\sim}(\sqrt{\log n})}$ due to Efremenko.
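For orientation only, here is the folklore information-theoretic 2-server PIR, which illustrates the model (this is emphatically not the paper's scheme; its communication is linear in $n$, while the point of matching-vector PIRs is subpolynomial communication): both servers hold the database, each answers one parity bit, and each individually sees only a uniformly random subset of indices.

```python
# Folklore 2-server PIR sketch: the client learns x[i], while each server
# sees a marginally uniform random subset and learns nothing about i.
import random

def parity(x, S):
    return sum(x[j] for j in S) % 2                      # a server's one-bit answer

def pir_query(x, i):
    n = len(x)
    S1 = {j for j in range(n) if random.getrandbits(1)}  # uniform random subset
    S2 = S1 ^ {i}                                        # flip membership of i
    return parity(x, S1) ^ parity(x, S2)                 # parities differ by x[i]

x = [1, 0, 1, 1, 0, 0, 1, 0]
assert all(pir_query(x, i) == x[i] for i in range(len(x)))
print("every bit recovered; neither server alone learned the index")
```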


Near-Optimal Time-Sparsity Trade-Offs for Solving Noisy Linear Equations

from arXiv: Computational Complexity

Authors: Kiril Bangachev, Guy Bresler, Stefan Tiegel, Vinod Vaikuntanathan

We present a polynomial-time reduction from solving noisy linear equations over $\mathbb{Z}/q\mathbb{Z}$ in dimension $\Theta(k\log n/\mathsf{poly}(\log k,\log q,\log\log n))$ with a uniformly random coefficient matrix to noisy linear equations over $\mathbb{Z}/q\mathbb{Z}$ in dimension $n$ where each row of the coefficient matrix has uniformly random support of size $k$. This allows us to deduce the hardness of sparse problems from their dense counterparts. In particular, we derive hardness results in the following canonical settings. 1) Assuming the $\ell$-dimensional (dense) LWE over a polynomial-size field takes time $2^{\Omega(\ell)}$, $k$-sparse LWE in dimension $n$ takes time $n^{\Omega({k}/{(\log k \cdot (\log k + \log \log n))})}.$ 2) Assuming the $\ell$-dimensional (dense) LPN over $\mathbb{F}_2$ takes time $2^{\Omega(\ell/\log \ell)}$, $k$-sparse LPN in dimension $n$ takes time $n^{\Omega(k/(\log k \cdot (\log k + \log \log n)^2))}~.$ These running time lower bounds are nearly tight as both sparse problems can be solved in time $n^{O(k)},$ given sufficiently many samples. We further give a reduction from $k$-sparse LWE to noisy tensor completion. Concretely, composing the two reductions implies that order-$k$ rank-$2^{k-1}$ noisy tensor completion in $\mathbb{R}^{n^{\otimes k}}$ takes time $n^{\Omega(k/(\log k \cdot (\log k + \log \log n)))}$, assuming the exponential hardness of standard worst-case lattice problems.
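To make the sparse regime concrete, here is a small generator for $k$-sparse LPN samples (my illustration of the problem setup, not of the reduction): each row of the coefficient matrix has a uniformly random support of size $k$, and each label is a noisy inner product with a hidden secret over $\mathbb{F}_2$.

```python
# Generate m samples of k-sparse LPN over F_2 with noise rate eta.
import numpy as np

def sparse_lpn(n, k, m, eta, rng):
    secret = rng.integers(0, 2, size=n)
    A = np.zeros((m, n), dtype=np.int8)
    for r in range(m):
        A[r, rng.choice(n, size=k, replace=False)] = 1   # random size-k support
    noise = (rng.random(m) < eta).astype(np.int8)        # Bernoulli(eta) flips
    b = (A @ secret + noise) % 2
    return A, b, secret

rng = np.random.default_rng(1)
A, b, s = sparse_lpn(n=200, k=5, m=1000, eta=0.1, rng=rng)
print("sample 0 support:", np.flatnonzero(A[0]), "label:", b[0])
```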


Empowering Large Scale Quantum Circuit Development: Effective Simulation of Sycamore Circuits

from arXiv: Computational Complexity

Authors: Venkateswaran Kasirajan, Torey Battelle, Bob Wold

Simulating quantum systems using classical computing equipment has been a significant research focus. This work demonstrates that circuits as large and complex as the random circuit sampling (RCS) circuits published as part of Google's pioneering work [4-7] claiming quantum supremacy can be effectively simulated with high fidelity on classical systems commonly available to developers, using the universal quantum simulator included in the Quantum Rings SDK, making this advancement accessible to everyone. This study achieved an average linear cross-entropy benchmarking (XEB) score of 0.678, indicating a strong correlation with ideal quantum simulation and exceeding the XEB values currently reported for the same circuits, while completing circuit execution in a reasonable timeframe. This capability empowers researchers and developers to build, debug, and execute large-scale quantum circuits ahead of the general availability of low-error-rate quantum computers, and to invent new quantum algorithms or deploy commercial-grade applications.
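The reported score is the standard linear cross-entropy benchmark, $F_{\mathrm{XEB}} = 2^n \, \mathbb{E}_x[p_{\mathrm{ideal}}(x)] - 1$, where $x$ ranges over the sampled bitstrings: it concentrates near 1 for samples from the ideal circuit distribution and near 0 for uniform noise. A toy estimator (my sketch, with a random Porter-Thomas-like stand-in for a real circuit's output distribution):

```python
# Linear XEB: F = 2^n * mean(p_ideal over sampled bitstrings) - 1.
import numpy as np

def linear_xeb(n_qubits, p_ideal, samples):
    return (2**n_qubits) * p_ideal[samples].mean() - 1.0

rng = np.random.default_rng(7)
n = 10
p = rng.exponential(size=2**n)              # Porter-Thomas-like weights
p /= p.sum()                                # a stand-in "ideal" distribution
good = rng.choice(2**n, size=50_000, p=p)   # sampled from the ideal distribution
noise = rng.integers(0, 2**n, size=50_000)  # uniform random bitstrings
print("ideal sampler:", round(linear_xeb(n, p, good), 3))   # close to 1
print("uniform noise:", round(linear_xeb(n, p, noise), 3))  # close to 0
```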


Multipacking in Euclidean Plane

from arXiv: Computational Geometry

Authors: Arun Kumar Das, Sandip Das, Sk Samim Islam, Ritam M Mitra, Bodhayan Roy

We initiate the study of multipacking problems for geometric point sets with respect to their Euclidean distances. We consider a set of $n$ points $P$ and define $N_s[v]$ as the subset of $P$ that includes the $s$ nearest points of $v \in P$ and the point $v$ itself. We assume that the \emph{$s$-th neighbor} of each point is unique, for every $s \in \{0, 1, 2, \dots , n-1\}$. For a natural number $r \leq n$, an $r$-multipacking is a set $ M \subseteq P $ such that for each point $ v \in P $ and for every integer $ 1\leq s \leq r $, $|N_s[v]\cap M|\leq (s+1)/2$. The $r$-multipacking number of $ P $ is the maximum cardinality of an $r$-multipacking of $ P $ and is denoted by $\mathsf{MP}_r(P)$. For $r=n-1$, an $r$-multipacking is simply called a multipacking, and the $r$-multipacking number is called the multipacking number. We study the problem of computing a maximum $r$-multipacking for point sets in $\mathbb{R}^2$. We show that a maximum $1$-multipacking can be computed in polynomial time but computing a maximum $2$-multipacking is NP-complete. Further, we provide approximation and parameterized solutions to the $2$-multipacking problem.
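A tiny brute-force solver makes the definition concrete (my sketch; it enumerates all subsets, so toy instances only). Since $|N_s[v] \cap M|$ is an integer, the constraint $\leq (s+1)/2$ is read as $\leq \lfloor (s+1)/2 \rfloor$.

```python
# Brute-force r-multipacking number of a small planar point set.
from itertools import combinations
import math

def multipacking_number(points, r):
    n = len(points)
    # order[v] lists indices by distance from v, so order[v][:s+1] = N_s[v]
    order = {v: sorted(range(n), key=lambda u: math.dist(points[v], points[u]))
             for v in range(n)}
    def feasible(M):
        return all(len(set(order[v][:s + 1]) & M) <= (s + 1) // 2
                   for v in range(n) for s in range(1, r + 1))
    for size in range(n, 0, -1):                   # largest feasible subset wins
        if any(feasible(set(M)) for M in combinations(range(n), size)):
            return size
    return 0

points = [(0, 0), (1, 0), (2, 0.1), (0, 1.3), (3, 1), (1.5, 2)]
print("MP_2 of the example set:", multipacking_number(points, 2))
```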


Testing classical properties from quantum data

from arXiv: Data Structures and Algorithms

Authors: Matthias C. Caro, Preksha Naik, Joseph Slote

Many properties of Boolean functions can be tested far more efficiently than the function can be learned. However, this advantage often disappears when testers are limited to random samples--a natural setting for data science--rather than queries. In this work we investigate the quantum version of this scenario: quantum algorithms that test properties of a function $f$ solely from quantum data in the form of copies of the function state for $f$. For three well-established properties, we show that the speedup lost when restricting classical testers to samples can be recovered by testers that use quantum data. For monotonicity testing, we give a quantum algorithm that uses $\tilde{\mathcal{O}}(n^2)$ function state copies as compared to the $2^{\Omega(\sqrt{n})}$ samples required classically. We also present $\mathcal{O}(1)$-copy testers for symmetry and triangle-freeness, comparing favorably to classical lower bounds of $\Omega(n^{1/4})$ and $\Omega(n)$ samples respectively. These algorithms are time-efficient and necessarily include techniques beyond the Fourier sampling approaches applied to earlier testing problems. These results make the case for a general study of the advantages afforded by quantum data for testing. We contribute to this project by complementing our upper bounds with a lower bound of $\Omega(1/\varepsilon)$ for monotonicity testing from quantum data in the proximity regime $\varepsilon\leq\mathcal{O}(n^{-3/2})$. This implies a strict separation between testing monotonicity from quantum data and from quantum queries--where $\tilde{\mathcal{O}}(n)$ queries suffice when $\varepsilon=\Theta(n^{-3/2})$. We also exhibit a testing problem that can be solved from $\mathcal{O}(1)$ classical queries but requires $\Omega(2^{n/2})$ function state copies, complementing a separation of the same magnitude in the opposite direction derived from the Forrelation problem.


Sorted Consecutive Occurrence Queries in Substrings

from arXiv: Data Structures and Algorithms

Authors: Waseem Akram, Takuya Mieno

The string indexing problem is a fundamental computational problem with numerous applications, including information retrieval and bioinformatics. It aims to efficiently solve the pattern matching problem: given a text $T$ of length $n$ for preprocessing and a pattern $P$ of length $m$ as a query, the goal is to report all occurrences of $P$ as substrings of $T$. Navarro and Thankachan [CPM 2015, Theor. Comput. Sci. 2016] introduced a variant of this problem called the gap-bounded consecutive occurrence query, which reports pairs of consecutive occurrences of $P$ in $T$ such that their gaps (i.e., the distances between them) lie within a query-specified range $[g_1, g_2]$. Recently, Bille et al. [FSTTCS 2020, Theor. Comput. Sci. 2022] proposed the top-$k$ close consecutive occurrence query, which reports the $k$ closest consecutive occurrences of $P$ in $T$, sorted in non-descending order of distance. Both problems are optimally solved in query time with $O(n \log n)$-space data structures. In this paper, we generalize these problems to the range query model, which focuses only on occurrences of $P$ in a specified substring $T[a..b]$ of $T$. Our contributions are as follows: (1) We propose an $O(n \log^2 n)$-space data structure that answers the range top-$k$ consecutive occurrence query in $O(|P| + \log\log n + k)$ time. (2) We propose an $O(n \log^{2+\epsilon} n)$-space data structure that answers the range gap-bounded consecutive occurrence query in $O(|P| + \log\log n + \mathit{output})$ time, where $\epsilon$ is a positive constant and $\mathit{output}$ denotes the number of outputs. Additionally, as by-products, we present algorithms for geometric problems involving weighted horizontal segments in a 2D plane, which are of independent interest.
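A naive reference implementation clarifies what these queries return (my sketch, 0-indexed; the paper's point is to answer them from a compact index without scanning $T[a..b]$).

```python
# Range top-k close consecutive occurrences of P in T[a..b], naively.
def range_topk_consecutive(T, P, a, b, k):
    occ = [i for i in range(a, b - len(P) + 2)       # occurrences inside T[a..b]
           if T[i:i + len(P)] == P]
    gaps = [(occ[j + 1] - occ[j], occ[j], occ[j + 1])
            for j in range(len(occ) - 1)]            # consecutive pairs and gaps
    return sorted(gaps)[:k]                          # the k closest pairs

T = "abaababaabaab"
print(range_topk_consecutive(T, "ab", 0, len(T) - 1, 3))
```

A gap-bounded query is the same computation with `sorted(gaps)[:k]` replaced by a filter keeping only gaps in $[g_1, g_2]$.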


Learning multivariate Gaussians with imperfect advice

from arXiv: Data Structures and Algorithms

Authors: Arnab Bhattacharyya, Davin Choo, Philips George John, Themis Gouleakis

We revisit the problem of distribution learning within the framework of learning-augmented algorithms. In this setting, we explore the scenario where a probability distribution is provided as potentially inaccurate advice on the true, unknown distribution. Our objective is to develop learning algorithms whose sample complexity decreases as the quality of the advice improves, thereby surpassing standard learning lower bounds when the advice is sufficiently accurate. Specifically, we demonstrate that this outcome is achievable for the problem of learning a multivariate Gaussian distribution $N(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ in the PAC learning setting. Classically, in the advice-free setting, $\tilde{\Theta}(d^2/\varepsilon^2)$ samples are sufficient and worst case necessary to learn $d$-dimensional Gaussians up to TV distance $\varepsilon$ with constant probability. When we are additionally given a parameter $\tilde{\boldsymbol{\Sigma}}$ as advice, we show that $\tilde{O}(d^{2-\beta}/\varepsilon^2)$ samples suffice whenever $\| \tilde{\boldsymbol{\Sigma}}^{-1/2} \boldsymbol{\Sigma} \tilde{\boldsymbol{\Sigma}}^{-1/2} - \boldsymbol{I_d} \|_1 \leq \varepsilon d^{1-\beta}$ (where $\|\cdot\|_1$ denotes the entrywise $\ell_1$ norm) for any $\beta > 0$, yielding a polynomial improvement over the advice-free setting.
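The advice-quality condition is straightforward to evaluate. A short numpy sketch (mine, not the paper's code) computing $\| \tilde{\Sigma}^{-1/2} \Sigma \tilde{\Sigma}^{-1/2} - I_d \|_1$ via an eigendecomposition of the advice matrix:

```python
# Entrywise-l1 advice quality for covariance advice sigma_adv vs truth sigma.
import numpy as np

def advice_quality(sigma, sigma_adv):
    w, V = np.linalg.eigh(sigma_adv)
    inv_sqrt = V @ np.diag(w ** -0.5) @ V.T        # symmetric sigma_adv^{-1/2}
    M = inv_sqrt @ sigma @ inv_sqrt
    return np.abs(M - np.eye(len(sigma))).sum()    # entrywise l1 norm

rng = np.random.default_rng(3)
d = 5
A = rng.standard_normal((d, d))
sigma = A @ A.T + np.eye(d)                        # true covariance (PSD)
advice = sigma + 0.01 * np.eye(d)                  # slightly-off advice
print(advice_quality(sigma, advice))               # small value => good advice
```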


Weighted Envy Freeness With Limited Subsidies

from arXiv: Data Structures and Algorithms

Authors: Noga Klein Elmalem, Rica Gonen, Erel Segal-Halevi

We explore solutions for fairly allocating indivisible items among agents assigned weights representing their entitlements. Our fairness goal is weighted-envy-freeness (WEF), where each agent deems their allocated portion relative to their entitlement at least as favorable as any other's relative to their own. In many cases, achieving WEF necessitates monetary transfers, which can be modeled as third-party subsidies. The goal is to attain WEF with bounded subsidies. Previous work in the unweighted setting of subsidies relied on basic characterizations of EF that fail in the weighted settings. This makes our new setting challenging and theoretically intriguing. We present polynomial-time algorithms that compute WEF-able allocations with an upper bound on the subsidy per agent in three distinct additive valuation scenarios: (1) general, (2) identical, and (3) binary. When all weights are equal, our bounds reduce to the bounds derived in the literature for the unweighted setting.
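To make the fairness notion concrete, here is a checker (my sketch, using one natural reading of weighted envy-freeness with payments, where agent $i$ does not envy $j$ iff $(v_i(A_i)+p_i)/w_i \geq (v_i(A_j)+p_j)/w_j$; the paper's exact definition may differ in details).

```python
# Check weighted envy-freeness of an allocation given subsidies.
def is_wef(values, alloc, weights, subsidy):
    def v(i, bundle):                     # additive valuations
        return sum(values[i][g] for g in bundle)
    n = len(weights)
    return all((v(i, alloc[i]) + subsidy[i]) / weights[i]
               >= (v(i, alloc[j]) + subsidy[j]) / weights[j]
               for i in range(n) for j in range(n) if i != j)

values = [[4, 3, 1], [4, 3, 1]]           # two agents, identical valuations
alloc = [{0}, {1, 2}]                     # bundles {item0} and {item1, item2}
weights = [2, 1]                          # agent 0 has twice the entitlement
print(is_wef(values, alloc, weights, [0, 0]))   # False: agent 0 envies
print(is_wef(values, alloc, weights, [4, 0]))   # True: a subsidy of 4 fixes it
```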


Local Density and its Distributed Approximation

from arXiv: Data Structures and Algorithms

Authors: Aleksander Bjørn Christiansen, Ivor van der Hoog, Eva Rotenberg

The densest subgraph problem is a classic problem in combinatorial optimisation. Danisch, Chan, and Sozio propose a definition for \emph{local density} that assigns to each vertex $v$ a value $\rho^*(v)$. This local density is a generalisation of the maximum subgraph density of a graph. I.e., if $\rho(G)$ is the subgraph density of a finite graph $G$, then $\rho(G)$ equals the maximum local density $\rho^*(v)$ over vertices $v$ in $G$. They approximate the local density of each vertex with no theoretical (asymptotic) guarantees. We provide an extensive study of this local density measure. Just as with (global) maximum subgraph density, we show that there is a dual relation between the local out-degrees and the minimum out-degree orientations of the graph. We introduce the definition of the local out-degree $g^*(v)$ of a vertex $v$, and show it to be equal to the local density $\rho^*(v)$. We consider the local out-degree to be conceptually simpler, shorter to define, and easier to compute. Using the local out-degree we show a previously unknown fact: that existing algorithms already dynamically approximate the local density. Next, we provide the first distributed algorithms that compute the local density with provable guarantees: given any $\varepsilon$ such that $\varepsilon^{-1} \in O(poly \, n)$, we show a deterministic distributed algorithm in the LOCAL model where, after $O(\varepsilon^{-2} \log^2 n)$ rounds, every vertex $v$ outputs a $(1 + \varepsilon)$-approximation of their local density $\rho^*(v)$. In CONGEST, we show a deterministic distributed algorithm that requires $\text{poly}(\log n,\varepsilon^{-1}) \cdot 2^{O(\sqrt{\log n})}$ rounds, which is sublinear in $n$. As a corollary, we obtain the first deterministic algorithm running in a sublinear number of rounds for $(1+\varepsilon)$-approximate densest subgraph detection in the CONGEST model.
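For a toy-sized reminder of the quantity being generalized (my sketch): the maximum subgraph density $\rho(G) = \max_{S \subseteq V} |E(S)|/|S|$, by brute force over vertex subsets.

```python
# Exponential-time maximum subgraph density; toy instances only.
from itertools import combinations

def max_subgraph_density(V, E):
    best = 0.0
    for k in range(1, len(V) + 1):
        for S in map(set, combinations(V, k)):
            inside = sum(1 for u, v in E if u in S and v in S)
            best = max(best, inside / k)
    return best

V = range(6)
E = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5)]
print(max_subgraph_density(V, E))   # 1.0, attained e.g. by the triangle {0,1,2}
```

The paper's local density $\rho^*(v)$ refines this single number to a per-vertex value whose maximum recovers $\rho(G)$.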


Reconfiguration Using Generalized Token Jumping

from arXiv: Data Structures and Algorithms

Authors: Jan Matyáš Křišťan, Jakub Svoboda

In reconfiguration, we are given two solutions to a graph problem, such as Vertex Cover or Dominating Set, with each solution represented by a placement of tokens on vertices of the graph. Our task is to reconfigure one into the other using small steps while ensuring the intermediate configurations of tokens are also valid solutions. The two commonly studied settings are Token Jumping and Token Sliding, which allow moving a single token to an arbitrary or an adjacent vertex, respectively. We introduce new rules that generalize Token Jumping, parameterized by the number of tokens allowed to move at once and by the maximum distance of each move. Our main contribution is identifying minimal rules that allow reconfiguring any possible given solution into any other for Independent Set, Vertex Cover, and Dominating Set. For each minimal rule, we also provide an efficient algorithm that finds a corresponding reconfiguration sequence. We further focus on the rule that allows each token to move to an adjacent vertex in a single step. This natural variant turns out to be the minimal rule that guarantees reconfigurability for Vertex Cover. We determine the computational complexity of deciding whether a (shortest) reconfiguration sequence exists under this rule for the three studied problems. While reachability for Vertex Cover is shown to be in P, finding a shortest sequence is shown to be NP-complete. For Independent Set and Dominating Set, even reachability is shown to be PSPACE-complete.
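A small BFS makes the reachability questions concrete (my illustration of the framework, not an algorithm from the paper): under plain Token Jumping, one token moves to an arbitrary vertex, subject to the configuration remaining a solution, here an independent set.

```python
# BFS over token configurations: Token Jumping reachability for
# independent sets.
from itertools import combinations
from collections import deque

def is_independent(G, S):
    return all(v not in G[u] for u, v in combinations(S, 2))

def tj_reachable(G, start, goal):
    start, goal = frozenset(start), frozenset(goal)
    seen, queue = {start}, deque([start])
    while queue:
        S = queue.popleft()
        if S == goal:
            return True
        for u in S:                      # jump the token on u ...
            for v in G:                  # ... to any unoccupied vertex v
                if v in S:
                    continue
                T = (S - {u}) | {v}
                if T not in seen and is_independent(G, T):
                    seen.add(T)
                    queue.append(T)
    return False

# On the 5-cycle, tokens on {0, 2} can reach {1, 3}.
G = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(tj_reachable(G, {0, 2}, {1, 3}))
```

The paper's generalized rules bound how many tokens may move at once and how far each may travel; the variant above is the special case of one token and unbounded distance.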


Efficient terabyte-scale text compression via stable local consistency and parallel grammar processing

from arXiv: Data Structures and Algorithms

Authors: Diego Diaz-Dominguez

We present a highly parallelizable text compression algorithm that scales efficiently to terabyte-sized datasets. Our method builds on locally consistent grammars, a lightweight form of compression, combined with simple recompression techniques to achieve further space reductions. Locally consistent grammar algorithms are particularly suitable for scaling, as they need minimal satellite information to compact the text. We introduce a novel concept to enable parallelisation: stable local consistency. A grammar algorithm ALG is stable if, for any pattern $P$ occurring in a collection $\mathcal{T}=\{T_1, T_2, \ldots, T_k\}$, the instances $ALG(T_1), ALG(T_2), \ldots, ALG(T_k)$ independently produce cores for $P$ with the same topology. In a locally consistent grammar, the core of $P$ is a subset of nodes and edges in $\mathcal{T}$'s parse tree that remains the same in all the occurrences of $P$. This feature is important to achieve compression, but it only holds if ALG synchronises the parsing of the strings, for instance, by defining a common set of nonterminal symbols for them. Stability removes the need for synchronisation during the parsing phase. Consequently, we can run $ALG(T_1), ALG(T_2), \ldots, ALG(T_k)$ fully in parallel and then merge the resulting grammars into a single compressed output equivalent to $ALG(\mathcal{T})$. We implemented our ideas and tested them on massive datasets. Our results showed that our method could process a diverse collection of bacterial genomes (7.9 TB) in around nine hours, requiring 16 threads and 0.43 bits/symbol of working memory, producing a compressed representation 85 times smaller than the original input.


Dimension Reduction via Sum-of-Squares and Improved Clustering Algorithms for Non-Spherical Mixtures

from arXiv: Data Structures and Algorithms

Authors: Prashanti Anderson, Mitali Bafna, Rares-Darius Buhai, Pravesh K. Kothari, David Steurer

We develop a new approach for clustering non-spherical (i.e., arbitrary component covariances) Gaussian mixture models via a subroutine, based on the sum-of-squares method, that finds a low-dimensional separation-preserving projection of the input data. Our method gives a non-spherical analog of the classical dimension reduction, based on singular value decomposition, that forms a key component of the celebrated spherical clustering algorithm of Vempala and Wang [VW04] (in addition to several other applications). As applications, we obtain an algorithm to (1) cluster an arbitrary total-variation separated mixture of $k$ centered (i.e., zero-mean) Gaussians with $n\geq \operatorname{poly}(d) f(w_{\min}^{-1})$ samples and $\operatorname{poly}(n)$ time, and (2) cluster an arbitrary total-variation separated mixture of $k$ Gaussians with identical but arbitrary unknown covariance with $n \geq d^{O(\log w_{\min}^{-1})} f(w_{\min}^{-1})$ samples and $n^{O(\log w_{\min}^{-1})}$ time. Here, $w_{\min}$ is the minimum mixing weight of the input mixture, and $f$ does not depend on the dimension $d$. Our algorithms naturally extend to tolerating a dimension-independent fraction of arbitrary outliers. Before this work, the techniques in the state-of-the-art non-spherical clustering algorithms needed $d^{O(k)} f(w_{\min}^{-1})$ time and samples for clustering such mixtures. Our results may come as a surprise in the context of the $d^{\Omega(k)}$ statistical query lower bound [DKS17] for clustering non-spherical Gaussian mixtures. While this result is usually thought to rule out $d^{o(k)}$ cost algorithms for the problem, our results show that the lower bounds can in fact be circumvented for a remarkably general class of Gaussian mixtures.


Brief Announcement: Parallel Construction of Bumped Ribbon Retrieval

from arXiv: Data Structures and Algorithms

Authors: Matthias Becht, Hans-Peter Lehmann, Peter Sanders

A retrieval data structure stores a static function f : S -> {0,1}^r . For all x in S, it returns the r-bit value f(x), while for other inputs it may return an arbitrary result. The structure cannot answer membership queries, so it does not have to encode S. The information theoretic space lower bound for arbitrary inputs is r|S| bits. Retrieval data structures have widespread applications. They can be used as an approximate membership filter for S by storing fingerprints of the keys in S, where they are faster and more space efficient than Bloom filters. They can also be used as a basic building block of succinct data structures like perfect hash functions. Bumped Ribbon Retrieval (BuRR) [Dillinger et al., SEA'22] is a recently developed retrieval data structure that is fast to construct with a space overhead of less than 1%. The idea is to solve a nearly diagonal system of linear equations to determine a matrix that, multiplied with the hash of each key, gives the desired output values. During solving, BuRR might bump lines of the equation system to another layer of the same data structure. While the paper describes a simple parallel construction based on bumping the keys on thread boundaries, it does not give an implementation. In this brief announcement, we now fill this gap. Our parallel implementation is transparent to the queries. It achieves a speedup of 14 on 32 cores for 8-bit filters. The additional space overhead is 105 bytes per thread, or 105 slots. This matches 0.0007% of the total space consumption when constructing with 1 billion input keys. A large portion of the construction time is spent on parallel sorting.
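The linear-algebra view is easy to demo at toy scale (my sketch of the general retrieval idea; real BuRR uses hash-derived ribbon rows of small width plus the bumping mechanism, none of which is modeled here): pick a random row $h(x) \in \mathbb{F}_2^m$ per key and solve $h(x) \cdot Z = f(x)$ over $\mathbb{F}_2$, after which a query is one vector-matrix product with no membership test. The per-key rows below are stand-ins for a real hash function and are stored only for the demo.

```python
# Toy retrieval structure: solve A Z = B over F_2 by Gauss-Jordan.
import numpy as np

def solve_gf2(A, B):
    A, B = A.copy(), B.copy()
    n, m = A.shape
    Z = np.zeros((m, B.shape[1]), dtype=np.uint8)
    pivots, row = [], 0
    for col in range(m):
        hits = np.flatnonzero(A[row:, col])
        if hits.size == 0:
            continue
        r = row + hits[0]                              # pivot row for this column
        A[[row, r]], B[[row, r]] = A[[r, row]], B[[r, row]]
        for rr in np.flatnonzero(A[:, col]):
            if rr != row:
                A[rr] ^= A[row]                        # eliminate over F_2
                B[rr] ^= B[row]
        pivots.append(col)
        row += 1
        if row == n:
            break
    assert not B[row:].any(), "inconsistent system; retry with fresh hashes"
    for i, col in enumerate(pivots):
        Z[col] = B[i]
    return Z

rng = np.random.default_rng(42)
keys = [f"key{i}" for i in range(20)]
f = {x: rng.integers(0, 2, size=8, dtype=np.uint8) for x in keys}   # r = 8 bits
m = 40                                                 # ~2n slots for the demo
H = {x: rng.integers(0, 2, size=m, dtype=np.uint8) for x in keys}   # stand-in hash rows
Z = solve_gf2(np.array([H[x] for x in keys]), np.array([f[x] for x in keys]))

assert all(np.array_equal(H[x] @ Z % 2, f[x]) for x in keys)
print("all values retrieved from a", m, "x 8 bit table; the key set is not encoded in Z")
```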


An Affine Equivalence Algorithm for S-boxes based on Matrix Invariants

from arXiv: Data Structures and Algorithms

Authors: Xincheng Hu, Xiao Zeng, Zhaoqiang Liu, Guowu Yang

We investigate the affine equivalence (AE) problem for S-boxes. Given two S-boxes denoted as $S_1$ and $S_2$, we aim to find two invertible AE transformations $A,B$ such that $S_1\circ A = B\circ S_2$ holds. Due to important applications in the analysis and design of block ciphers, the study of AE algorithms has gained growing significance. In this paper, we first propose a zeroization operation on S-boxes, by which the AE problem can be transformed into $2^n$ linear equivalence problems. Second, we propose the standard orthogonal spatial matrix (SOSM), whose rank is invariant under AE transformations. Finally, based on the zeroization operation and the SOSM method, we propose a depth-first-search (DFS) method for determining AE of S-boxes, named the AE\_SOSM\_DFS algorithm. Using this matrix invariant, we reduce the time complexity of the algorithm to approximately $\frac{1}{2^n}$ of the complexity without SOSM. Specifically, the complexity of our algorithm is $O(2^{3n})$. In addition, we also conducted experiments with non-invertible S-boxes, and the performance is similar to that of invertible S-boxes. Moreover, our proposed algorithm can effectively handle S-boxes with low algebraic degree as well as certain popular S-boxes such as AES and ARIA\_s2, which are difficult to handle with the algorithm proposed by Dinur (2018). Using our algorithm, it takes only 5.5 seconds to find out that the seven popular S-boxes, namely AES, ARIA\_s2, Camellia, Chiasmus, DBlock, SEED\_S0, and SMS4, are affine equivalent, and the AE transformations of these S-boxes are provided.
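The AE relation itself is simple to check in code. A verifier (my sketch of the condition $S_1 \circ A = B \circ S_2$, not the paper's search algorithm), with the affine maps given as invertible matrices plus constant vectors over $\mathbb{F}_2^n$:

```python
# Verify S1(A(x)) == B(S2(x)) for all n-bit inputs x.
import numpy as np

def apply_affine(M, c, x, n):
    bits = np.array([(x >> i) & 1 for i in range(n)], dtype=np.uint8)
    y = (M @ bits + c) % 2                             # affine map over F_2^n
    return int(sum(int(b) << i for i, b in enumerate(y)))

def affine_equivalent(S1, S2, A, a, B, b, n):
    return all(S1[apply_affine(A, a, x, n)] == apply_affine(B, b, S2[x], n)
               for x in range(2**n))

# Trivial 2-bit check: identity transformations relate any S-box to itself.
n = 2
S = [0, 3, 1, 2]
I = np.eye(n, dtype=np.uint8)
z = np.zeros(n, dtype=np.uint8)
print(affine_equivalent(S, S, I, z, I, z, n))          # True
```

Naive search over all pairs $(A, B)$ is hopeless even for modest $n$, which is what invariants like the SOSM rank are for: they prune candidate transformations before any exhaustive matching.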


Extending the Burrows-Wheeler Transform for Cartesian Tree Matching and Constructing It

from arXiv: Data Structures and Algorithms

Authors: Eric M. Osterkamp, Dominik Köppl

Cartesian tree matching is a form of generalized pattern matching where a substring of the text matches the pattern if they share the same Cartesian tree. This form of matching finds applications in time series of stock prices and can be of interest for melody matching between musical scores. For the indexing problem, the state-of-the-art data structure is a Burrows-Wheeler-transform-based solution due to [Kim and Cho, CPM'21], which uses nearly succinct space and can count the number of substrings that Cartesian-tree match a pattern in time linear in the pattern length. The authors address the construction of their data structure with a straightforward solution that, however, requires pointer-based data structures, which asymptotically need more space than compact solutions [Kim and Cho, CPM'21, Section A.4]. We address this bottleneck with a construction that requires compact space and has a time complexity linear in the product of the text length with some logarithmic terms. Additionally, we can extend this index for indexing multiple circular texts in the spirit of the extended Burrows-Wheeler transform without sacrificing the time and space complexities. We present this index in a dynamic variant, where we pay a logarithmic slowdown and need compact space for the extra functionality that we can incrementally add texts. Our extended setting is of interest for finding repetitive motifs common in the aforementioned applications, independent of offsets and scaling.
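For intuition about the matching notion itself (our sketch, separate from the BWT index): two strings Cartesian-tree match precisely when their parent-distance encodings agree, the standard characterization from the Cartesian tree matching literature, assuming the usual tie-breaking with non-strict comparisons. A minimal Python sketch:

def parent_distance(s):
    # PD[i] = i - j for the nearest j < i with s[j] <= s[i], or 0 if no
    # such j exists; computed with a monotonic stack in linear time.
    stack, pd = [], []
    for i, v in enumerate(s):
        while stack and s[stack[-1]] > v:
            stack.pop()
        pd.append(i - stack[-1] if stack else 0)
        stack.append(i)
    return pd

# Two price series with the same "shape" share an encoding:
assert parent_distance([3, 1, 4, 1, 5]) == parent_distance([10, 2, 12, 3, 20])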

Space-Efficient Online Computation of String Net Occurrences

from arXiv: Data Structures and Algorithms

Authors: Takuya Mieno, Shunsuke Inenaga

A substring $u$ of a string $T$ is said to be a repeat if $u$ occurs at least twice in $T$. An occurrence $[i..j]$ of a repeat $u$ in $T$ is said to be a net occurrence if each of the substrings $aub = T[i-1..j+1]$, $au = T[i-1..j]$, and $ub = T[i..j+1]$ occurs exactly once in $T$. The occurrence $[i-1..j+1]$ of $aub$ is said to be an extended net occurrence of $u$. Let $T$ be an input string of length $n$ over an alphabet of size $\sigma$, and let $\mathsf{ENO}(T)$ denote the set of extended net occurrences of repeats in $T$. Guo et al. [SPIRE 2024] presented an online algorithm which can report $\mathsf{ENO}(T[1..i])$ in $O(n\sigma^2)$ time, for each prefix $T[1..i]$ of $T$. Very recently, Inenaga [arXiv 2024] gave a faster online algorithm that can report $\mathsf{ENO}(T[1..i])$ in optimal $O(\#\mathsf{ENO}(T[1..i]))$ time for each prefix $T[1..i]$ of $T$, where $\#S$ denotes the cardinality of a set $S$. Both of the aforementioned data structures can be maintained in $O(n \log \sigma)$ time and occupy $O(n)$ space, where the $O(n)$-space requirement comes from the suffix tree data structure. In this paper, we propose the following two space-efficient alternatives: (1) A sliding-window algorithm with $O(d)$ working space that can report $\mathsf{ENO}(T[i-d+1..i])$ in optimal $O(\#\mathsf{ENO}(T[i-d+1..i]))$ time for each sliding window $T[i-d+1..i]$ of size $d$ in $T$. (2) A CDAWG-based online algorithm with $O(e)$ working space that can report $\mathsf{ENO}(T[1..i])$ in optimal $O(\#\mathsf{ENO}(T[1..i]))$ time for each prefix $T[1..i]$ of $T$, where $e < 2n$ is the number of edges in the CDAWG for $T$. All of our proposed data structures can be maintained in $O(n \log \sigma)$ time for the input online string $T$. We also discuss how the extended net occurrences of repeats in $T$ can be fully characterized in terms of the minimal unique substrings (MUSs) in $T$.
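A naive checker makes the definition concrete (our didactic sketch, unrelated to the papers' algorithms): count all substring multiplicities, then test each occurrence of each repeat against the three exactly-once conditions. Indices are 0-based and the counting takes quadratic space, so this is for illustration only.

from collections import Counter

def extended_net_occurrences(T):
    # Report [i-1, j+1] for every net occurrence T[i..j] of a repeat:
    # aub = T[i-1..j+1], au = T[i-1..j], and ub = T[i..j+1] must each
    # occur exactly once in T, while u itself occurs at least twice.
    n = len(T)
    count = Counter(T[i:j] for i in range(n) for j in range(i + 1, n + 1))
    out = []
    for i in range(1, n):
        for j in range(i, n - 1):
            u = T[i:j + 1]
            if count[u] < 2:
                continue
            aub, au, ub = T[i - 1:j + 2], T[i - 1:j + 1], T[i:j + 2]
            if count[aub] == count[au] == count[ub] == 1:
                out.append((i - 1, j + 1))
    return out

print(extended_net_occurrences("abaababa"))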

Matroid Secretary via Labeling Schemes

from arXiv: Data Structures and Algorithms

Authors: Kristóf Bérczi, Vasilis Livanos, José Soto, Victor Verdugo

The Matroid Secretary Problem (MSP) is one of the most prominent settings for online resource allocation and optimal stopping. A decision-maker is presented with a ground set of elements $E$ revealed sequentially and in random order. Upon arrival, an irrevocable decision is made in a take-it-or-leave-it fashion, subject to a feasibility constraint on the set of selected elements captured by a matroid defined over $E$. The decision-maker only has ordinal access to compare the elements, and the goal is to design an algorithm that selects every element of the optimal basis with probability at least $\alpha$ (i.e., $\alpha$-probability-competitive). While the existence of a constant probability-competitive algorithm for MSP remains a major open question, simple greedy policies are at the core of state-of-the-art algorithms for several matroid classes. We introduce a flexible and general algorithmic framework to analyze greedy-like algorithms for MSP based on constructing a language associated with the matroid. Using this language, we establish a lower bound on the probability-competitiveness of the algorithm by studying a corresponding Poisson point process that governs the words' distribution in the language. Using our framework, we break the state-of-the-art guarantee for laminar matroids by settling the probability-competitiveness of the greedy-improving algorithm to be exactly $1-\ln(2) \approx 0.3068$. For graphic matroids, we show a probability-competitiveness of $0.2693$ when the underlying graph has no parallel edges and a guarantee of $0.2504$ for general graphs, also breaking the state-of-the-art factor of $0.25$.

Parsing Millions of DNS Records per Second

from arXiv: Data Structures and Algorithms

Authors: Jeroen Koekkoek, Daniel Lemire

The Domain Name System (DNS) plays a critical role in the functioning of the Internet. It provides a hierarchical name space for locating resources. Data is typically stored in plain text files, possibly spanning gigabytes. Frequent parsing of these files to refresh the data is computationally expensive: processing a zone file can take minutes. We propose a novel approach called simdzone to enhance DNS parsing throughput. We use data parallelism, specifically the Single Instruction Multiple Data (SIMD) instructions available on commodity processors. We show that we can multiply the parsing speed compared to state-of-the-art parsers found in Knot DNS and the NLnet Labs Name Server Daemon (NSD). The resulting software library replaced the parser in NSD.

Tuesday, November 19

Assistant Professor at University of Memphis (apply by December 2, 2024)

from CCI: jobs

The Department of Computer Science at the University of Memphis is seeking candidates for Assistant Professor position(s) beginning Fall 2025. Qualified candidates in all areas of computer science are invited to apply, while candidates with core expertise in robotics, AI, digital twins, software engineering, theory/algorithms, and cybersecurity are particularly encouraged. Website: https://workforum.memphis.edu/postings/42509 Email: cconnor2@memphis.edu

By shacharlovett

The Optimal Part of Control

from Ben Recht

A mad dash through the rudiments of predictive control.

This is the live blog of Lecture 22 of my graduate class “Convex Optimization.” A Table of Contents is here.

Given their mutual background in control and dynamical systems, it's curious that Boyd and Vandenberghe have minimal coverage of control theory in their book. There are control applications scattered in the exercises but no central coverage. Optimal control gets two pages in Chapter 10, and it's mostly to highlight the banded structure of the associated KKT system.

It’s particularly curious because one of the most touted cvx success stories is the use of cvxgen in the control of SpaceX rockets. Written by Boyd’s advisee Jacob Mattingley, cvxgen generates fast code for solving disciplined quadratic programs. Beyond cvxgen, control engineers remain some of the biggest consumers of convex optimization methods. The first half of Predictive Control for Linear and Hybrid Systems by Borrelli, Bemporad, and Morari covers optimization methods.

Moreover, in the 20 years since the publication of their book, there’s been a renewed interest in control and its applications as people have pushed the limits of reinforcement learning. A little taste of control in an optimization class can help contextualize some of the jargon and methods used in modern “learning robotic” systems.

It’s worth spending a week exploring why optimization is central to modern control systems. Fortunately, Boyd has taught control theory before, and his course materials are fantastic. One of these courses, EE 363, has excellent lecture notes. I’ll go through the first couple of lectures this week, leaning on the connection to optimization. I’ve also written about control extensively on this blog. It’s a favorite topic of mine, and one I’m always happy to teach.

Now, what can we do in two lectures? That’s a fun pedagogical question, and we shall see how it goes this week.

The optimal control problem concerns steering a system’s configuration over time by applying appropriate inputs. At every time, we can record a list of measurements of the system, called outputs, and the set of inputs we are currently applying to steer the system. The cost of the optimization problem is a function of the time history, or trajectory, of the inputs and outputs.

For example, we might want a vehicle to follow a path with a particular velocity using minimal power consumption. The measurements here are the position and velocity, and the input is the associated power consumption. The objective will measure deviation from the trajectory and the total amount of power consumption.

Another canonical example comes from supply chains. Here, the measured outputs are the amounts of products available on shelves today. The objective function accounts for how much revenue a store makes on a particular product, how much it costs to hold supply in stock, how much the store loses to opportunity cost if it understocks, and how much it costs to resupply. The inputs are the daily amounts of restocking. Additionally, the purchases are uncertain inputs outside the control of the store. These must be modeled and forecast.

Control problems are constrained by dynamics. The system under control has a set of internal states that govern its behavior. The dynamical model of the system asserts that the next state is a function only of the current state and current input. An optimal control problem aims to minimize the cost of a trajectory subject to the dynamical laws relating inputs and states.

Dynamical laws are equality constraints. This class has taught us that the only equality constraints that generically result in convex constraint sets are linear. Hence, we should not be surprised that the most common dynamical models are linear as well. They assert that the future state is a linear function of the current state and input. Not all dynamical systems are linear, of course, but the supply chain example above is. Newton’s laws are also linear. Many nonlinear problems, including those in vehicle dynamics, can be well approximated by linear dynamics in reasonable operating regimes.

For the cost function, a common modeling constraint is that the objective decomposes into a sum of costs for every time step of the trajectory. This is not a necessary assumption for convexity, but this assumption guarantees that the optimal control problem can be solved in time scaling linearly with the length of the trajectory. Without a separable cost function, the optimal control problem may require cubic time in this length, and such algorithmic scaling quickly becomes prohibitive.
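Assembled from the last two paragraphs, the workhorse template is the finite-horizon problem below (my shorthand, with convex stage costs \(\ell_t\), linear dynamics, and a fixed initial state). The separable sum plus the stagewise coupling is exactly what gives the banded KKT system mentioned above, and hence the solve time linear in the horizon.

\begin{align*}
\text{minimize}\quad & \sum_{t=0}^{T-1} \ell_t(x_t, u_t) \;+\; \ell_T(x_T)\\
\text{subject to}\quad & x_{t+1} = A x_t + B u_t, \quad t = 0, \dots, T-1,\\
& x_0 = x_{\mathrm{init}}.
\end{align*}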

Algorithmic concerns tend to tie our hands in control modeling. For the sake of computational efficiency, our texts advise focusing on linear systems and separable costs. These modeling restrictions provide a crutch to make optimization algorithms work, but you can land giant rockets with such structured optimization problems. These constraints are more freeing than they appear.

There’s another reason why we can get away with linear models in control problems: feedback. While you can plan over a long trajectory, modeling errors and unforeseen disturbances in the environment can quickly ruin your plans. The solution in modern controls is constant replanning. You use the optimal control problem to make an optimal plan subject to your forecast of what might happen given your model of dynamics and disturbances. You take a single step and then see what actually happens with the unforeseen disturbances and the states. You replan from there. This constant replanning, called Model Predictive Control, helps you correct for unanticipated behaviors. The art of design in model predictive control is understanding what you should forecast and for how long in order to engineer good performance. But the mechanics of model predictive control follow immediately from what we’ve studied in our semester of convex optimization.
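Here is a minimal cvxpy sketch of that replanning loop (illustrative only: the dynamics matrices, horizon, input limits, and disturbance model are all made up, and this is not code from the class). Solve the finite-horizon problem, commit to the first input, let the world respond, and replan:

import cvxpy as cp
import numpy as np

np.random.seed(0)
n, m, H = 4, 2, 20                           # state dim, input dim, horizon
A = np.eye(n) + 0.1 * np.random.randn(n, n)  # made-up linear dynamics
B = 0.1 * np.random.randn(n, m)
x_now = np.ones(n)

for step in range(30):
    x = cp.Variable((n, H + 1))
    u = cp.Variable((m, H))
    cost = cp.sum_squares(x) + 0.1 * cp.sum_squares(u)  # separable in time
    cons = [x[:, 0] == x_now, cp.abs(u) <= 1]           # input limits
    for t in range(H):                                  # dynamics as equalities
        cons.append(x[:, t + 1] == A @ x[:, t] + B @ u[:, t])
    cp.Problem(cp.Minimize(cost), cons).solve()
    u0 = u.value[:, 0]                        # apply only the first input
    w = 0.01 * np.random.randn(n)             # unforeseen disturbance
    x_now = A @ x_now + B @ u0 + w            # observe the new state; replan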

By Ben Recht

TR24-182 | Maximum Circuit Lower Bounds for Exponential-time Arthur Merlin | Jiatu Li, Lijie Chen, Jingxun Liang

from ECCC Papers

We show that the complexity class of exponential-time Arthur Merlin with sub-exponential advice ($AMEXP_{/2^{n^{\varepsilon}}}$) requires circuit complexity at least $2^n/n$. Previously, the best known such near-maximum lower bounds were for symmetric exponential time by Chen, Hirahara, and Ren (STOC'24) and Li (STOC'24), or randomized exponential time with MCSP oracle and sub-exponential advice by Hirahara, Lu, and Ren (CCC'23). Our result is proved by combining the recent iterative win-win paradigm of Chen, Lu, Oliveira, Ren, and Santhanam (FOCS'23) together with the uniform hardness-vs-randomness connection for Arthur-Merlin protocols by Shaltiel-Umans (STOC'07) and van Melkebeek-Sdroievski (CCC'23). We also provide a conceptually different proof using a novel "critical win-win" argument that extends a technique of Lu, Oliveira, and Santhanam (STOC'21). Indeed, our circuit lower bound is a corollary of a new explicit construction for properties in $coAM$. We show that for every dense property $P \in coAM$, there is a quasi-polynomial-time Arthur-Merlin protocol with short advice such that the following holds for infinitely many $n$: There exists a canonical string $w_n \in P \cap \{0,1\}^n$ so that (1) there is a strategy of Merlin such that Arthur outputs $w_n$ with probability $1$ and (2) for any strategy of Merlin, with probability $2/3$, Arthur outputs either $w_n$ or a failure symbol $\bot$. As a direct consequence of this new explicit construction, our circuit lower bound also generalizes to circuits with an $AM \cap coAM$ oracle. To our knowledge, this is the first unconditional lower bound against a strong non-uniform class using a hard language that is only "quantitatively harder".

TR24-181 | Zero-Knowledge in Streaming Interactive Proofs | Tomer Gewirtzman, Ron Rothblum

from ECCC Papers

In a recent work, Cormode, Dall'Agnol, Gur and Hickey (CCC, 2024) introduced the model of Zero-Knowledge Streaming Interactive Proofs (zkSIPs). Loosely speaking, such proof-systems enable a prover to convince a streaming verifier that the input $x$, to which it has read-once streaming access, satisfies some property, in such a way that nothing beyond the correctness of the claim is revealed. Cormode et al. also gave constructions of zkSIPs for some specific and notable problems of interest. In this work, we advance the study of zero-knowledge proofs in the streaming model by presenting protocols that are significantly more general and more secure. We use a definition of zero-knowledge that is a variation of that used by Cormode et al., which we find more appealing but is technically incomparable. Our main result is a zkSIP for any NP relation that can be decided by low-depth polynomial-size circuits. We emphasize that this is the first general-purpose protocol in this model, which captures, as a special case, the problems considered by the prior work. We also construct a specialized protocol for the ``polynomial evaluation'' problem considered in that work, with improved parameters. The protocols constructed by Cormode et al. have an inverse polylogarithmic simulation error (i.e., a gap with which a bounded-space distinguisher can distinguish the simulation from a real execution). This means that their protocols are entirely insecure if run multiple times (say on different inputs). In contrast, our protocols achieve a negligible zero-knowledge error, a stronger and far more robust security guarantee.

TR24-180 | Locally Sampleable Uniform Symmetric Distributions | Daniel Kane, Anthony Ostuni, Kewen Wu

from ECCC Papers

We characterize the power of constant-depth Boolean circuits in generating uniform symmetric distributions. Let $f\colon\{0,1\}^m\to\{0,1\}^n$ be a Boolean function where each output bit of $f$ depends only on $O(1)$ input bits. Assume the output distribution of $f$ on uniform input bits is close to a uniform distribution $\mathcal D$ with a symmetric support. We show that $\mathcal D$ is essentially one of the following six possibilities: (1) point distribution on $0^n$, (2) point distribution on $1^n$, (3) uniform over $\{0^n,1^n\}$, (4) uniform over strings with even Hamming weights, (5) uniform over strings with odd Hamming weights, and (6) uniform over all strings. This confirms a conjecture of Filmus, Leigh, Riazanov, and Sokolov (RANDOM 2023).

On the hardness of cloning and connections to representation theory

from arXiv: Computational Complexity

Authors: Vojtěch Havlíček, Chinmay Nirkhe

The states accepted by a quantum circuit are known as the witnesses for the quantum circuit's satisfiability. The assumption BQP does not equal QMA implies that no efficient algorithm exists for constructing a witness for a quantum circuit from the circuit's classical description. However, a similar complexity-theoretic lower bound on the computational hardness of cloning a witness is not known. In this note, we derive a conjecture about cloning algorithms for maximally entangled states over hidden subspaces which would imply that no efficient algorithm exists for cloning witnesses (assuming BQP does not contain NP). The conjecture and result follow from connections between quantum computation and representation theory; specifically, the relationship between quantum state complexity and the complexity of computing Kronecker coefficients.

Improved PIR Schemes using Matching Vectors and Derivatives

from arXiv: Computational Complexity

Authors: Fatemeh Ghasemi, Swastik Kopparty, Madhu Sudan

In this paper, we construct new t-server Private Information Retrieval (PIR) schemes with communication complexity subpolynomial in the previously best known, for all but finitely many t. Our results are based on combining derivatives (in the spirit of Woodruff-Yekhanin) with the Matching Vector based PIRs of Yekhanin and Efremenko. Previously such a combination was achieved in an ingenious way by Dvir and Gopi, using polynomials and derivatives over certain exotic rings, en route to their fundamental result giving the first 2-server PIR with subpolynomial communication. Our improved PIRs are based on two ingredients:
- We develop a new and direct approach to combine derivatives with Matching Vector based PIRs. This approach is much simpler than that of Dvir-Gopi: it works over the same field as the original PIRs, and only uses elementary properties of polynomials and derivatives.
- A key subproblem that arises in the above approach is a higher-order polynomial interpolation problem. We show how "sparse S-decoding polynomials", a powerful tool from the original constructions of Matching Vector PIRs, can be used to solve this higher-order polynomial interpolation problem using surprisingly few higher-order evaluations.
Combining the known sparse S-decoding polynomials with our ideas leads to our improved PIRs. Notably, we get a 3-server PIR scheme with communication $2^{O^{\sim}((\log n)^{1/3})}$, improving upon the previously best known communication of $2^{O^{\sim}(\sqrt{\log n})}$ due to Efremenko.

Gadgetless Lifting Beats Round Elimination: Improved Lower Bounds for Pointer Chasing

from arXiv: Computational Complexity

Authors: Xinyu Mao, Guangxu Yang, Jiapeng Zhang

We prove an $\Omega(n/k+k)$ communication lower bound on the $(k-1)$-round distributional complexity of the $k$-step pointer chasing problem under the uniform input distribution, improving the $\Omega(n/k - k \log n)$ lower bound due to Yehudayoff (Combinatorics, Probability and Computing, 2020). Our lower bound almost matches the upper bound of $O(n/k + k)$ communication by Nisan and Wigderson (STOC'91). As part of our approach, we put forth gadgetless lifting, a new framework that lifts lower bounds for a family of restricted protocols into lower bounds for general protocols. A key step in gadgetless lifting is choosing the appropriate definition of restricted protocols. In this paper, our definition of restricted protocols is inspired by the structure-vs-pseudorandomness decomposition by Göös, Pitassi, and Watson (FOCS'17) and Yang and Zhang (STOC'24). Previously, round-communication trade-offs were mainly obtained by round elimination and information complexity. Both methods face barriers in certain situations, and we believe gadgetless lifting could potentially overcome these barriers.

Hereditary First-Order Model Checking

from arXiv: Computational Complexity

Authors: Manuel Bodirsky, Santiago Guzmán-Pro

Many computational problems can be modelled as the class of all finite relational structures $\mathbb A$ that satisfy a fixed first-order sentence $\phi$ hereditarily, i.e., we require that every substructure of $\mathbb A$ satisfies $\phi$. In this case, we say that the class is in HerFO. The problems in HerFO are always in coNP, and sometimes coNP-complete. HerFO also contains many interesting computational problems in P, including many constraint satisfaction problems (CSPs). We show that HerFO captures the class of complements of CSPs for reducts of finitely bounded structures, i.e., every such CSP is polynomial-time equivalent to the complement of a problem in HerFO. However, we also prove that HerFO does not have the full computational power of coNP: there are problems in coNP that are not polynomial-time equivalent to a problem in HerFO, unless E=NE. Another main result is a description of the quantifier-prefixes for $\phi$ such that hereditarily checking $\phi$ is in P; we show that for every other quantifier-prefix there exists a formula $\phi$ with this prefix such that hereditarily checking $\phi$ is coNP-complete.

Computing Conforming Partitions with Low Stabbing Number for Rectilinear Polygons

from arXiv: Computational Geometry

Authors: Therese Biedl, Stephane Durocher, Debajyoti Mondal, Rahnuma Islam Nishat, Bastien Rivier

A \emph{conforming partition} of a rectilinear $n$-gon $P$ is a partition of $P$ into rectangles without using Steiner points (i.e., all corners of all rectangles must lie on the boundary of $P$). The stabbing number of such a partition is the maximum number of rectangles intersected by an axis-aligned segment lying in the interior of $P$. In this paper, we examine the problem of computing conforming partitions with low stabbing number. We show that computing a conforming partition with stabbing number at most $4$ is $NP$-hard, which strengthens a previously known hardness result [Durocher \& Mehrabi, Theor. Comput. Sci. 689: 157-168 (2017)] and eliminates the possibility of fixed-parameter-tractable algorithms parameterized by the stabbing number unless $P = NP$. In contrast, we give (i) an $O(n \log n)$-time algorithm to decide whether a conforming partition with stabbing number $2$ exists, (ii) a fixed-parameter-tractable algorithm parameterized by both the stabbing number and the treewidth of the pixelation of the polygon, and (iii) a fixed-parameter-tractable algorithm parameterized by the stabbing number for simple polygons in general position.

gDist: Efficient Distance Computation between 3D Meshes on GPU

from arXiv: Computational Geometry

Authors: Peng Fang, Wei Wang, Ruofeng Tong, Hailong Li, Min Tang

Computing maximum/minimum distances between 3D meshes is crucial for various applications, e.g., robotics, CAD, and VR/AR. In this work, we introduce a highly parallel algorithm (gDist) optimized for Graphics Processing Units (GPUs), which is capable of computing the distance between two meshes with over 15 million triangles in less than 0.4 milliseconds (Fig. 1). By testing on benchmarks with varying characteristics, the algorithm achieves remarkable speedups over prior CPU-based and GPU-based algorithms on a commodity GPU (NVIDIA GeForce RTX 4090). Notably, the algorithm consistently maintains high-speed performance, even in challenging scenarios that pose difficulties for prior algorithms.

Solving convex QPs with structured sparsity under indicator conditions

from arXiv: Data Structures and Algorithms

Authors: Daniel Bienstock, Tongtong Chen

We study convex optimization problems where disjoint blocks of variables are controlled by binary indicator variables that are also subject to conditions, e.g., cardinality. Several classes of important examples can be formulated in such a way that both the objective and the constraints are separable convex quadratics. We describe a family of polynomial-time approximation algorithms and negative complexity results.

Explicit Two-Sided Vertex Expanders Beyond the Spectral Barrier

from arXiv: Data Structures and Algorithms

Authors: Jun-Ting Hsieh, Ting-Chun Lin, Sidhanth Mohanty, Ryan O'Donnell, Rachel Yun Zhang

We construct the first explicit two-sided vertex expanders that bypass the spectral barrier. Previously, the strongest known explicit vertex expanders were given by $d$-regular Ramanujan graphs, whose spectral properties imply that every small subset of vertices $S$ has at least $0.5d|S|$ distinct neighbors. However, it is possible to construct Ramanujan graphs containing a small set $S$ with no more than $0.5d|S|$ neighbors. In fact, no explicit construction was known to break the $0.5 d$-barrier. In this work, we give an explicit construction of an infinite family of $d$-regular graphs (for large enough $d$) where every small set expands by a factor of $\approx 0.6d$. More generally, for large enough $d_1,d_2$, we give an infinite family of $(d_1,d_2)$-biregular graphs where small sets on the left expand by a factor of $\approx 0.6d_1$, and small sets on the right expand by a factor of $\approx 0.6d_2$. In fact, our construction satisfies an even stronger property: small sets on the left and right have unique-neighbor expansion $0.6d_1$ and $0.6d_2$ respectively. Our construction follows the tripartite line product framework of Hsieh, McKenzie, Mohanty & Paredes, and instantiates it using the face-vertex incidence of the $4$-dimensional Ramanujan clique complex as its base component. As a key part of our analysis, we derive new bounds on the triangle density of small sets in the Ramanujan clique complex.

Maximization of Approximately Submodular Functions

from arXiv: Data Structures and Algorithms

Authors: Thibaut Horel, Yaron Singer

We study the problem of maximizing a function that is approximately submodular under a cardinality constraint. Approximate submodularity implicitly appears in a wide range of applications as in many cases errors in evaluation of a submodular function break submodularity. Say that $F$ is $\varepsilon$-approximately submodular if there exists a submodular function $f$ such that $(1-\varepsilon)f(S) \leq F(S)\leq (1+\varepsilon)f(S)$ for all subsets $S$. We are interested in characterizing the query-complexity of maximizing $F$ subject to a cardinality constraint $k$ as a function of the error level $\varepsilon>0$. We provide both lower and upper bounds: for $\varepsilon>n^{-1/2}$ we show an exponential query-complexity lower bound. In contrast, when $\varepsilon< {1}/{k}$ or under a stronger bounded curvature assumption, we give constant approximation algorithms.

Near-Optimal Averaging Samplers and Matrix Samplers

from arXiv: Data Structures and Algorithms

Authors: Zhiyang Xun, David Zuckerman

We present the first efficient averaging sampler that achieves asymptotically optimal randomness complexity and near-optimal sample complexity. For any $\delta < \varepsilon$ and any constant $\alpha > 0$, our sampler uses $m + O(\log (1 / \delta))$ random bits to output $t = O((\frac{1}{\varepsilon^2} \log \frac{1}{\delta})^{1 + \alpha})$ samples $Z_1, \dots, Z_t \in \{0, 1\}^m$ such that for any function $f: \{0, 1\}^m \to [0, 1]$, \[ \Pr\left[\left|\frac{1}{t}\sum_{i=1}^t f(Z_i) - \mathbb{E}[f]\right| \leq \varepsilon\right] \geq 1 - \delta. \] The randomness complexity is optimal up to a constant factor, and the sample complexity is optimal up to the $O((\frac{1}{\varepsilon^2} \log \frac{1}{\delta})^{\alpha})$ factor. Our technique generalizes to matrix samplers. A matrix sampler is defined similarly, except that $f: \{0, 1\}^m \to \mathbb{C}^{d \times d}$ and the absolute value is replaced by the spectral norm. Our matrix sampler achieves randomness complexity $m + \tilde O (\log(d / \delta))$ and sample complexity $ O((\frac{1}{\varepsilon^2} \log \frac{d}{\delta})^{1 + \alpha})$ for any constant $\alpha > 0$, both near-optimal with only a logarithmic factor in randomness complexity and an additional $\alpha$ exponent on the sample complexity. We use known connections with randomness extractors and list-decodable codes to give applications to these objects. Specifically, we give the first extractor construction with optimal seed length up to an arbitrarily small constant factor above 1, when the min-entropy $k = \beta n$ for a large enough constant $\beta < 1$.

Computational Complexity of Envy-free and Exchange-stable Seat Arrangement Problems on Grid Graphs

from arXiv: Data Structures and Algorithms

Authors: Sota Kawase, Shuichi Miyazaki

The Seat Arrangement Problem is the problem of finding a desirable seat arrangement for given preferences of agents and a seat graph that represents a configuration of seats. In this paper, we consider the decision problems of determining whether an envy-free arrangement exists and whether an exchange-stable arrangement exists, when the seat graph is an $\ell \times m$ grid graph. When $\ell=1$, the seat graph is a path of length $m$ and both problems are known to be NP-complete. We extend these results and show that both problems are NP-complete for every integer $\ell \geq 2$.

Hardness Results on Characteristics for Elastic-Degenerated Strings

from arXiv: Data Structures and Algorithms

Authors: Dominik Köppl, Jannik Olbrich

Generalizations of plain strings have been proposed as a compact way to represent a collection of nearly identical sequences or to express uncertainty at specific text positions by enumerating all possibilities. While a plain string stores a single character at each position, generalizations consider a set of characters (indeterminate strings), a set of strings of equal length (generalized degenerate strings, or GD strings for short), or a set of strings of arbitrary lengths (elastic-degenerate strings, or ED strings for short). These generalizations are important for compactly representing such data, and find applications in bioinformatics for representing and maintaining a set of genetic sequences of the same taxonomy or a multiple sequence alignment. To put them to use, attention has been drawn to answering various query types such as pattern matching or measuring similarity of ED strings by generalizing techniques known for plain strings. However, for some types of queries, it has been shown that a generalization of a polynomial-time solvable query on classic strings becomes NP-hard on ED strings, e.g. [Russo et al., 2022]. In that light, we study other types of queries of particular interest to bioinformatics: the search for the longest repeating factor, unique substrings, absent words, anti-powers, and longest previous factors. While we obtain a polynomial-time algorithm for the first problem on ED strings, we show that all others are NP-hard to compute, some of them even under the restriction that the input can be modelled as an indeterminate or GD string.

Towards Scalable and Practical Batch-Dynamic Connectivity

from arXiv: Data Structures and Algorithms

Authors: Quinten De Man, Laxman Dhulipala, Adam Karczmarz, Jakub Łącki, Julian Shun, Zhongqi Wang

We study the problem of dynamically maintaining the connected components of an undirected graph subject to edge insertions and deletions. We give the first parallel algorithm for the problem which is work-efficient, supports batches of updates, runs in polylogarithmic depth, and uses only linear total space. The existing algorithms for the problem either use super-linear space, do not come with strong theoretical bounds, or are not parallel. On the empirical side, we provide the first implementation of the cluster forest algorithm, the first linear-space and polylogarithmic-update-time algorithm for dynamic connectivity. Experimentally, we find that our algorithm uses up to 19.7x less space and is up to 6.2x faster than the level-set algorithm of HDT, arguably the most widely-implemented dynamic connectivity algorithm with strong theoretical guarantees.
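
For intuition, the insertions-only special case of this problem is solved by a simple union-find structure; a minimal sketch follows. Fully dynamic algorithms such as HDT or the cluster forest algorithm must additionally handle deletions, which this sketch does not attempt.

```python
# Incremental (insertions-only) connectivity via union-find.
# Deletions are what make the fully dynamic problem hard.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def insert_edge(self, u, v):
        self.parent[self.find(u)] = self.find(v)

    def connected(self, u, v):
        return self.find(u) == self.find(v)

uf = UnionFind(4)
uf.insert_edge(0, 1)
uf.insert_edge(2, 3)
print(uf.connected(0, 1), uf.connected(1, 2))  # True False
```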

A Bicriterion Concentration Inequality and Prophet Inequalities for $k$-Fold Matroid Unions

from arXiv: Data Structures and Algorithms

Authors: Noga Alon, Nick Gravin, Tristan Pollner, Aviad Rubinstein, Hongao Wang, S. Matthew Weinberg, Qianfan Zhang

We investigate prophet inequalities with competitive ratios approaching $1$, seeking to generalize $k$-uniform matroids. We first show that large girth does not suffice: for all $k$, there exists a matroid of girth $\geq k$ and a prophet inequality instance on that matroid whose optimal competitive ratio is $\frac{1}{2}$. Next, we show $k$-fold matroid unions do suffice: we provide a prophet inequality with competitive ratio $1-O(\sqrt{\frac{\log k}{k}})$ for any $k$-fold matroid union. Our prophet inequality follows from an online contention resolution scheme. The key technical ingredient in our online contention resolution scheme is a novel bicriterion concentration inequality for arbitrary monotone $1$-Lipschitz functions over independent items, which may be of independent interest. Applied to our particular setting, our bicriterion concentration inequality yields "Chernoff-strength" concentration for a $1$-Lipschitz function that is not (approximately) self-bounding.

Distributed Maximum Flow in Planar Graphs

from arXiv: Data Structures and Algorithms

Authors: Yaseen Abd-Elhaleem, Michal Dory, Merav Parter, Oren Weimann

The dual of a planar graph $G$ is a planar graph $G^*$ that has a vertex for each face of $G$ and an edge for each pair of adjacent faces of $G$. The profound relationship between a planar graph and its dual has been the algorithmic basis for solving numerous (centralized) classical problems on planar graphs. In the distributed setting, however, the only use of planar duality is for finding a recursive decomposition of $G$ [DISC 2017, STOC 2019]. We extend the distributed algorithmic toolkit to work on the dual graph $G^*$. These tools can then facilitate various algorithms on $G$ by solving a suitable dual problem on $G^*$. Given a directed planar graph $G$ with positive and negative edge-lengths and hop-diameter $D$, our key result is an $\tilde{O}(D^2)$-round algorithm for Single Source Shortest Paths on $G^*$, which then implies an $\tilde{O}(D^2)$-round algorithm for Maximum $st$-Flow on $G$. Prior to our work, no $\tilde{O}(\text{poly}(D))$-round algorithm was known for Maximum $st$-Flow. We further obtain a $D\cdot n^{o(1)}$-round $(1-\epsilon)$-approximation algorithm for Maximum $st$-Flow on $G$ when $G$ is undirected and $st$-planar. Finally, we give a near-optimal $\tilde O(D)$-round algorithm for computing the weighted girth of $G$. The main challenges in our work are that $G^*$ is not the communication graph (e.g., a vertex of $G$ is mapped to multiple vertices of $G^*$), and that the diameter of $G^*$ can be much larger than $D$ (i.e., possibly by a linear factor). We overcome these challenges by carefully defining and maintaining subgraphs of the dual graph $G^*$ while applying the recursive decomposition on the primal graph $G$. The main technical difficulty is that, along the recursive decomposition, a face of $G$ gets shattered into (disconnected) components, yet we still need to treat it as a dual node.
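
As a toy illustration of the duality in the first sentence, the sketch below builds the (simple) dual adjacency from a planar embedding whose faces are given as edge lists; it ignores multi-edges in the dual and is of course unrelated to the paper's distributed algorithms.

```python
# Build the simple dual adjacency of a planar embedding, given its faces
# as lists of undirected edges. Two faces are adjacent in the dual
# exactly when they share an edge. Illustrative sketch only.
from itertools import combinations

def dual_graph(faces):
    """faces: list of edge lists; returns dual edges as (face_i, face_j)."""
    norm = lambda e: tuple(sorted(e))
    edge_sets = [set(map(norm, f)) for f in faces]
    return [(i, j) for i, j in combinations(range(len(faces)), 2)
            if edge_sets[i] & edge_sets[j]]

# A triangle has two faces (inner and outer) sharing all three edges,
# so its dual has one vertex per face and a single (simple) dual edge.
triangle = [(0, 1), (1, 2), (2, 0)]
print(dual_graph([triangle, triangle]))  # [(0, 1)]
```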

Hash & Adjust: Competitive Demand-Aware Consistent Hashing

from arXiv: Data Structures and Algorithms

Authors: Arash Pourdamghani, Chen Avin, Robert Sama, Maryam Shiran, Stefan Schmid

Distributed systems often serve dynamic workloads, and resource demands evolve over time. Such temporal behavior stands in contrast to the static and demand-oblivious nature of most data structures used by these systems. In this paper, we are particularly interested in consistent hashing, a fundamental building block in many large distributed systems. Our work is motivated by the hypothesis that a more adaptive approach to consistent hashing can leverage structure in the demand, and hence improve storage utilization and reduce access time. We initiate the study of demand-aware consistent hashing. Our main contribution is H&A, a constant-competitive online algorithm (i.e., it comes with provable performance guarantees over time). H&A is demand-aware and optimizes its internal structure to enable faster access times, while offering a high utilization of storage. We further evaluate H&A empirically.
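
For readers unfamiliar with the building block: below is a minimal sketch of classic, demand-oblivious consistent hashing, in which keys and servers hash onto a ring and each key goes to the first server clockwise from it. This is only the baseline; H&A's demand-aware adjustments are the paper's contribution and are not shown.

```python
# Classic consistent hashing: servers and keys hash to points on a ring;
# a key is assigned to the nearest server point clockwise from it.
import bisect
import hashlib

def h(x):
    """Hash to a point on a ring of size 2^32."""
    return int(hashlib.sha256(str(x).encode()).hexdigest(), 16) % (2**32)

class Ring:
    def __init__(self, servers):
        self.points = sorted((h(s), s) for s in servers)

    def lookup(self, key):
        # First server point strictly clockwise of the key's point,
        # wrapping around the ring if necessary.
        i = bisect.bisect(self.points, (h(key), chr(0x10FFFF)))
        return self.points[i % len(self.points)][1]

ring = Ring(["s1", "s2", "s3"])
print(ring.lookup("user42"))  # stable assignment, robust to server churn
```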

The Complexity Landscape of Dynamic Distributed Subgraph Finding

from arXiv: Data Structures and Algorithms

Authors: Yi-Jun Chang, Lyuting Chen, Yanyu Chen, Gopinath Mishra, Mingyang Yang

Bonne and Censor-Hillel (ICALP 2019) initiated the study of distributed subgraph finding in dynamic networks of limited bandwidth. For the case where the target subgraph is a clique, they determined the tight bandwidth complexity bounds in nearly all settings. However, several open questions remain, and very little is known about finding subgraphs beyond cliques. In this work, we consider these questions and explore subgraphs beyond cliques. For finding cliques, we establish an $\Omega(\log \log n)$ bandwidth lower bound for one-round membership-detection under edge insertions only and an $\Omega(\log \log \log n)$ bandwidth lower bound for one-round detection under both edge insertions and node insertions. Moreover, we demonstrate new algorithms to show that our lower bounds are tight in bounded-degree networks when the target subgraph is a triangle. Prior to our work, no lower bounds were known for these problems. For finding subgraphs beyond cliques, we present a complete characterization of the bandwidth complexity of the membership-listing problem for every target subgraph, every number of rounds, and every type of topological change: node insertions, node deletions, edge insertions, and edge deletions. We also show partial characterizations for one-round membership-detection and listing.

Efficient Sample-optimal Learning of Gaussian Tree Models via Sample-optimal Testing of Gaussian Mutual Information

from arXiv: Data Structures and Algorithms

Authors: Sutanu Gayen, Sanket Kale, Sayantan Sen

Learning high-dimensional distributions is a significant challenge in machine learning and statistics. Classical research has mostly concentrated on asymptotic analysis of such data under suitable assumptions. While existing works [Bhattacharyya et al.: SICOMP 2023, Daskalakis et al.: STOC 2021, Choo et al.: ALT 2024] focus on discrete distributions, the current work addresses the tree structure learning problem for Gaussian distributions, providing efficient algorithms with solid theoretical guarantees. This is crucial as real-world distributions are often continuous and differ from the discrete scenarios studied in prior works. In this work, we design a conditional mutual information tester for Gaussian random variables that can test whether two Gaussian random variables are independent, or whether their conditional mutual information is at least $\varepsilon$, for some parameter $\varepsilon \in (0,1)$, using $\mathcal{O}(\varepsilon^{-1})$ samples, which we show to be near-optimal. In contrast, an additive estimation would require $\Omega(\varepsilon^{-2})$ samples. Our upper bound technique uses linear regression on a pair of suitably transformed random variables. Importantly, we show that the chain rule of conditional mutual information continues to hold for the estimated (conditional) mutual information. As an application of such a mutual information tester, we give an efficient $\varepsilon$-approximate structure-learning algorithm for an $n$-variate Gaussian tree model that takes $\widetilde{\Theta}(n\varepsilon^{-1})$ samples, which we again show to be near-optimal. In contrast, when the underlying Gaussian model is not known to be tree-structured, we show that $\widetilde{\Theta}(n^2\varepsilon^{-2})$ samples are necessary and sufficient to output an $\varepsilon$-approximate tree structure. We perform extensive experiments that corroborate our theoretical convergence bounds.
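
For jointly Gaussian pairs, mutual information has the closed form $I(X;Y) = -\frac{1}{2}\ln(1-\rho^2)$, which the sketch below estimates by plugging in the sample correlation. The paper's tester is built differently (via linear regression on transformed variables, with optimal sample complexity); this is only meant to make the quantity being tested concrete.

```python
# Plug-in estimate of Gaussian mutual information from the sample
# correlation, using I(X;Y) = -1/2 * ln(1 - rho^2). Illustration only;
# not the paper's tester.
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.6, 100_000

# Sample a bivariate Gaussian with correlation rho.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

rho_hat = np.corrcoef(x, y)[0, 1]
mi_hat = -0.5 * np.log(1 - rho_hat**2)
mi_true = -0.5 * np.log(1 - rho**2)
print(f"estimate {mi_hat:.4f} vs truth {mi_true:.4f}")
```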

SpiderDAN: Matching Augmentation in Demand-Aware Networks

from arXiv: Data Structures and Algorithms

Authors: Aleksander Figiel, Darya Melnyk, André Nichterlein, Arash Pourdamghani, Stefan Schmid

Graph augmentation is a fundamental and well-studied problem that arises in network optimization. We consider a new variant of this model motivated by reconfigurable communication networks. In this variant, we consider a given physical network and the measured communication demands between the nodes. Our goal is to augment the given physical network with a matching, so that the shortest path lengths in the augmented network, weighted with the demands, are minimal. We prove that this problem is NP-hard, even if the physical network is a cycle. We then use results from demand-aware network design to provide a constant-factor approximation algorithm for adding a matching in the case where only a few nodes in the network generate almost all of the communication. For general real-world communication patterns, we design and evaluate a series of heuristics that can deal with arbitrary graphs as the underlying network structure. Our algorithms are validated experimentally using real-world data center traces (e.g., from Facebook).
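
The objective can be stated concretely: minimize the demand-weighted sum of shortest-path lengths in the augmented network. The sketch below computes that cost and shows, on a 4-cycle, how adding one matching edge helps; all instance data is made up.

```python
# Demand-weighted shortest-path cost of a (possibly augmented) network.
# Illustrative sketch of the objective only, not the paper's algorithms.
from collections import deque
from itertools import combinations

def dists_from(src, adj):
    """BFS hop distances from src."""
    d = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def weighted_cost(nodes, edges, demand):
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    return sum(demand[u][v] * dists_from(u, adj)[v]
               for u, v in combinations(nodes, 2) if demand[u][v])

# Cycle 0-1-2-3-0 with heavy demand between the opposite nodes 0 and 2:
nodes = range(4)
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
demand = [[0, 0, 5, 0], [0, 0, 0, 0], [5, 0, 0, 0], [0, 0, 0, 0]]
print(weighted_cost(nodes, cycle, demand))             # 10 (distance 2)
print(weighted_cost(nodes, cycle + [(0, 2)], demand))  # 5 (chord added)
```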

On the compressiveness of the Burrows-Wheeler transform

from arXiv: Data Structures and Algorithms

Authors: Hideo Bannai, Tomohiro I, Yuto Nakashima

The Burrows-Wheeler transform (BWT) is a reversible transform that converts a string $w$ into another string $\mathsf{BWT}(w)$. The size of the run-length encoded BWT (RLBWT) can be interpreted as a measure of repetitiveness in the class of representations called dictionary compression which are essentially representations based on copy and paste operations. In this paper, we shed new light on the compressiveness of BWT and the bijective BWT (BBWT). We first extend previous results on the relations of their run-length compressed sizes $r$ and $r_B$. We also show that the so-called "clustering effect" of BWT and BBWT can be captured by measures other than empirical entropy or run-length encoding. In particular, we show that BWT and BBWT do not increase the repetitiveness of the string with respect to various measures based on dictionary compression by more than a polylogarithmic factor. Furthermore, we show that there exists an infinite family of strings that are maximally incompressible by any dictionary compression measure, but become very compressible after applying BBWT. An interesting implication of this result is that it is possible to transcend dictionary compression in some cases by simply applying BBWT before applying dictionary compression.
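
For reference, the classic construction: sort all cyclic rotations of the (sentinel-terminated) string and read off the last column; the number of runs in the result is the measure $r$ discussed above. This quadratic sketch is for illustration only; practical implementations go through suffix arrays, and the bijective BBWT is not shown.

```python
# BWT via sorted cyclic rotations (O(n^2 log n); fine for illustration),
# plus the run-length encoding whose number of runs is the measure r.
from itertools import groupby

def bwt(w, sentinel="$"):
    """Last column of the sorted cyclic rotations of w + sentinel."""
    s = w + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def rlbwt(w):
    """Run-length encoding of BWT(w); its length is the measure r."""
    return [(ch, len(list(g))) for ch, g in groupby(bwt(w))]

print(bwt("banana"))    # annb$aa
print(rlbwt("banana"))  # [('a', 1), ('n', 2), ('b', 1), ('$', 1), ('a', 2)]
```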

Massively Parallel Maximum Coverage Revisited

from arXiv: Data Structures and Algorithms

Authors: Thai Bui, Hoa T. Vu

We study the maximum set coverage problem in the massively parallel model. In this setting, $m$ sets that are subsets of a universe of $n$ elements are distributed among $m$ machines. In each round, these machines can communicate with each other, subject to the memory constraint that no machine may use more than $\tilde{O}(n)$ memory. The objective is to find the $k$ sets whose coverage is maximized. We consider the regime where $k = \Omega(m)$, $m = O(n)$, and each machine has $\tilde{O}(n)$ memory. Maximum coverage is a special case of the submodular maximization problem subject to a cardinality constraint. This problem can be approximated to within a $1-1/e$ factor using the greedy algorithm, but this approach is not directly applicable to parallel and distributed models. When $k = \Omega(m)$, to obtain a $1-1/e-\epsilon$ approximation, previous work either requires $\tilde{O}(mn)$ memory per machine, which is not interesting compared to the trivial algorithm that sends the entire input to a single machine, or requires $2^{O(1/\epsilon)} n$ memory per machine, which is prohibitively expensive even for a moderately small value of $\epsilon$. Our result is a randomized $(1-1/e-\epsilon)$-approximation algorithm that uses $O(1/\epsilon^3 \cdot \log m \cdot (\log (1/\epsilon) + \log m))$ rounds. Our algorithm involves solving a slightly transformed linear program of the maximum coverage problem using the multiplicative weights update method, classic techniques in parallel computing such as parallel prefix, and various combinatorial arguments.
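
The sequential greedy algorithm mentioned above, which achieves the $1-1/e$ factor by repeatedly taking the set covering the most new elements, fits in a few lines; the paper's challenge is matching this guarantee (up to $\epsilon$) under the per-machine memory and round constraints, which greedy does not address.

```python
# Classic greedy for maximum coverage: k rounds, each picking the set
# that covers the most not-yet-covered elements (a (1 - 1/e) factor).
def greedy_max_coverage(sets, k):
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
print(greedy_max_coverage(sets, 2))  # ([2, 0], {1, 2, 3, 4, 5, 6, 7})
```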

Reliable Learning of Halfspaces under Gaussian Marginals

from arXiv: Data Structures and Algorithms

Authors: Ilias Diakonikolas, Lisheng Ren, Nikos Zarifis

We study the problem of PAC learning halfspaces in the reliable agnostic model of Kalai et al. (2012). The reliable PAC model captures learning scenarios where one type of error is costlier than the others. Our main positive result is a new algorithm for reliable learning of Gaussian halfspaces on $\mathbb{R}^d$ with sample and computational complexity $$d^{O(\log (\min\{1/\alpha, 1/\epsilon\}))}\min (2^{\log(1/\epsilon)^{O(\log (1/\alpha))}},2^{\mathrm{poly}(1/\epsilon)})\;,$$ where $\epsilon$ is the excess error and $\alpha$ is the bias of the optimal halfspace. We complement our upper bound with a Statistical Query lower bound suggesting that the $d^{\Omega(\log (1/\alpha))}$ dependence is best possible. Conceptually, our results imply a strong computational separation between reliable agnostic learning and standard agnostic learning of halfspaces in the Gaussian setting.

Learning the Sherrington-Kirkpatrick Model Even at Low Temperature

from arXiv: Data Structures and Algorithms

Authors: Gautam Chandrasekaran, Adam Klivans

We consider the fundamental problem of learning the parameters of an undirected graphical model or Markov Random Field (MRF) in the setting where the edge weights are chosen at random. For Ising models, we show that a multiplicative-weight update algorithm due to Klivans and Meka learns the parameters in polynomial time for any inverse temperature $\beta \leq \sqrt{\log n}$. This immediately yields an algorithm for learning the Sherrington-Kirkpatrick (SK) model beyond the high-temperature regime of $\beta < 1$. Prior work breaks down at $\beta = 1$ and requires heavy machinery from statistical physics or functional inequalities. In contrast, our analysis is relatively simple and uses only subgaussian concentration. Our results extend to MRFs of higher order (such as pure $p$-spin models), where even results in the high-temperature regime were not known.
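
As a reminder of the object being learned, the sketch below samples a Sherrington-Kirkpatrick instance under one common normalization (i.i.d. Gaussian couplings with standard deviation $1/\sqrt{n}$, scaled by $\beta$); the paper's exact convention may differ, and nothing here reflects the learning algorithm itself.

```python
# Sample an SK instance and evaluate the beta-scaled interaction term of
# one spin configuration. The normalization is an assumption of this
# sketch (one common convention), not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 50, 1.5  # beta > 1: beyond the high-temperature regime

J = np.triu(rng.standard_normal((n, n)), 1) / np.sqrt(n)
J = J + J.T  # symmetric couplings, zero diagonal

x = rng.choice([-1, 1], size=n)       # a spin configuration in {-1,+1}^n
interaction = beta * (x @ J @ x) / 2  # each pair {i, j} counted once
print(f"beta * sum_(i<j) J_ij x_i x_j = {interaction:.3f}")
```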

Approximation algorithms for non-sequential star packing problems

from arXiv: Data Structures and Algorithms

Authors: Mengyuan Hu, An Zhang, Yong Chen, Mingyang Gong, Guohui Lin

For a positive integer $k \ge 1$, a $k$-star ($k^+$-star, $k^-$-star, respectively) is a connected graph containing a degree-$\ell$ vertex and $\ell$ degree-$1$ vertices, where $\ell = k$ ($\ell \ge k$, $1 \le \ell \le k$, respectively). The $k^+$-star packing problem is to cover as many vertices of an input graph $G$ as possible using vertex-disjoint $k^+$-stars in $G$; and given $k > t \ge 1$, the $k^-/t$-star packing problem is to cover as many vertices of $G$ as possible using vertex-disjoint $k^-$-stars but no $t$-stars in $G$. Both problems are NP-hard for any fixed $k \ge 2$. We present a $(1 + \frac{k^2}{2k+1})$-approximation algorithm and a $\frac{3}{2}$-approximation algorithm for the $k^+$-star packing problem when $k \ge 3$ and $k = 2$, respectively, and a $(1 + \frac{1}{t + 1 + 1/k})$-approximation algorithm for the $k^-/t$-star packing problem when $k > t \ge 2$. All of them are local search algorithms, and they improve on the best known approximation algorithms for the respective problems.
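
To pin down the objective, the sketch below verifies that a proposed collection of stars is a feasible $k^-/t$-star packing (every star has between $1$ and $k$ leaves, none has exactly $t$, leaves are adjacent to their center, and the stars are vertex-disjoint) and reports the coverage; the paper's local search algorithms are not reproduced here.

```python
# Feasibility check and coverage count for a k^-/t-star packing.
# Illustrative sketch only; not the paper's local search algorithm.
def covered_vertices(stars, adj, k, t):
    """stars: list of (center, leaves) pairs. Returns #covered vertices."""
    used = set()
    for center, leaves in stars:
        if not (1 <= len(leaves) <= k) or len(leaves) == t:
            raise ValueError("star has a forbidden number of leaves")
        if any(v not in adj[center] for v in leaves):
            raise ValueError("leaf not adjacent to its center")
        star = {center, *leaves}
        if star & used:
            raise ValueError("stars are not vertex-disjoint")
        used |= star
    return len(used)

# A graph with edges 0-1, 0-2, 0-3, 3-4, 4-5; two disjoint 2-stars cover it.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0, 4}, 4: {3, 5}, 5: {4}}
print(covered_vertices([(0, [1, 2]), (4, [3, 5])], adj, k=3, t=1))  # 6
```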
