OPML feed of all feeds.

Subscribe to the Atom feed or RSS feed to stay up to date.

Thank you to arXiv for use of its open access interoperability.

Note: the date of arXiv entries announced right after publication holidays might incorrectly show up as the date of the publication holiday itself. This is due to our ad hoc method of inferring announcement dates, which are not returned by the arXiv API.

Powered by Pluto.

Source on GitHub.

Maintained by Nima Anari, Arnab Bhattacharyya, Gautam Kamath.

Theory of Computing Report

Friday, October 04

Tenure-track faculty positions in CS at Harvard University (apply by December 1, 2024)

from CCI: jobs

The Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) seeks applicants for a tenure-track position in Computer Science, with an expected start date of July 1, 2025. We invite applications from all areas of Computer Science.
(Other adjacent departments, like Statistics and Applied Math, are also hiring and may be of interest to TCS candidates.)

Website: https://academicpositions.harvard.edu/postings/14166
Email: jmileski@g.harvard.edu

By shacharlovett

Inapproximability of Sparsest Vector in a Real Subspace

from arXiv: Computational Complexity

Authors: Vijay Bhattiprolu, Euiwoong Lee

We establish strong inapproximability for finding the sparsest nonzero vector in a real subspace. We show that it is NP-hard (under randomized reductions) to approximate the sparsest vector in a subspace within any constant factor (or almost polynomial factors in quasipolynomial time). We recover as a corollary state-of-the-art inapproximability for the shortest vector problem (SVP), a foundational problem in lattice-based cryptography. Our proof is surprisingly simple, bypassing even the PCP theorem. We are inspired by the homogenization framework from the inapproximability theory of minimum distance problems (MDC) in integer lattices and error-correcting codes. We use a combination of (a) \emph{product testing via tensor codes} and (b) \emph{encoding an assignment as a coset of a random code in higher dimensional space} in order to embed non-homogeneous quadratic equations into the sparsest vector problem. (a) is inspired by Austrin and Khot's simplified proof of hardness of MDC over finite fields, and (b) is inspired by Micciancio's semi-derandomization of hardness of SVP. Our reduction involves the challenge of performing (a) over the reals. We prove that tensoring of the kernel of a +1/-1 random matrix furnishes an adequate product test (while still allowing (b)). The proof exposes a connection to Littlewood-Offord theory and relies on a powerful anticoncentration result of Rudelson and Vershynin. Our main motivation in this work is the development of inapproximability theory for problems over the reals. Analytic variants of sparsest vector have connections to small set expansion, quantum separability and polynomial maximization over convex sets, all of which cause similar barriers to inapproximability. The approach we develop could lead to progress on the hardness of some of these problems.

Approximate Degrees of Multisymmetric Properties with Application to Quantum Claw Detection

from arXiv: Computational Complexity

Authors: Seiichiro Tani

The claw problem is central to theoretical computer science as well as cryptography. The optimal quantum query complexity of the problem is known to be $\Omega\left(\sqrt{G}+(FG)^{1/3} \right)$ for input functions $f\colon [F]\to Z$ and $g\colon [G]\to Z$. However, the lower bound was proved when the range $Z$ is sufficiently large (i.e., $|{Z}|=\Omega(FG)$). The current paper proves that the lower bound holds even for every smaller range $Z$ with $|{Z}|\ge F+G$. This implies that $\Omega\left(\sqrt{G}+(FG)^{1/3} \right)$ is tight for every such range. In addition, the lower bound $\Omega\left(\sqrt{G}+F^{1/3}G^{1/6}M^{1/6}\right)$ is provided for an even smaller range $Z=[M]$ with every $M\in [2,F+G]$ by reduction from the claw problem with $|{Z}|= F+G$. The proof technique is general enough to apply to any $k$-symmetric property (e.g., the $k$-claw problem), i.e., the Boolean function $\Phi$ on the set of $k$ functions with different-size domains and a common range such that $\Phi$ is invariant under the permutations over each domain and the permutations over the range. More concretely, it generalizes Ambainis's argument [Theory of Computing, 1(1):37-46] to the multiple-function case by using the notion of multisymmetric polynomials.

The Role of piracy in quantum proofs

from arXiv: Computational Complexity

Authors: Anne Broadbent, Alex B. Grilo, Supartha Podder, Jamie Sikora

A well-known feature of quantum information is that it cannot, in general, be cloned. Recently, a number of quantum-enabled information-processing tasks have demonstrated various forms of uncloneability; among these forms, piracy is an adversarial model that gives maximal power to the adversary, controlling both a cloning-type attack and the evaluation/verification stage. Here, we initiate the study of anti-piracy proof systems, which are proof systems that inherently prevent piracy attacks. We define anti-piracy proof systems, demonstrate such a proof system for an oracle problem, and also describe a candidate anti-piracy proof system for NP. We also study quantum proof systems that are cloneable and settle the famous QMA vs. QMA(2) debate in this setting. Lastly, we discuss how one can approach the QMA vs. QCMA question by studying its cloneable variants.

Is uniform expressivity too restrictive? Towards efficient expressivity of graph neural networks

from arXiv: Computational Complexity

Authors: Sammy Khalife, Josué Tonelli-Cueto

Uniform expressivity guarantees that a Graph Neural Network (GNN) can express a query without the parameters depending on the size of the input graphs. This property is desirable in applications in order to have a number of trainable parameters that is independent of the size of the input graphs. Uniform expressivity of the two-variable guarded fragment (GC2) of first-order logic is a well-celebrated result for Rectified Linear Unit (ReLU) GNNs [Barcelo et al., 2020]. In this article, we prove that uniform expressivity of GC2 queries is not possible for GNNs with a wide class of Pfaffian activation functions (including the sigmoid and tanh), answering a question formulated by [Grohe, 2021]. We also show that despite these limitations, many of those GNNs can still efficiently express GC2 queries in a way that the number of parameters remains logarithmic in the maximal degree of the input graphs. Furthermore, we demonstrate that a log-log dependency on the degree is achievable for a certain choice of activation function. This shows that uniform expressivity can be successfully relaxed while still covering the large graphs appearing in practical applications. Our experiments illustrate that our theoretical estimates hold in practice.

A fast algorithm for computing a planar support for non-piercing rectangles

from arXiv: Computational Geometry

Authors: Ambar Pal, Rajiv Raman, Saurabh Ray, Karamjeet Singh

For a hypergraph $\mathcal{H}=(X,\mathcal{E})$, a \emph{support} is a graph $G$ on $X$ such that for each $E\in\mathcal{E}$, the induced subgraph of $G$ on the elements in $E$ is connected. If $G$ is planar, we call it a planar support. A set of axis-parallel rectangles $\mathcal{R}$ forms a non-piercing family if for any $R_1, R_2 \in \mathcal{R}$, $R_1 \setminus R_2$ is connected. Given a set $P$ of $n$ points in $\mathbb{R}^2$ and a set $\mathcal{R}$ of $m$ \emph{non-piercing} axis-aligned rectangles, we give an algorithm for computing a planar support for the hypergraph $(P,\mathcal{R})$ in $O(n\log^2 n + (n+m)\log m)$ time, where each $R\in\mathcal{R}$ defines a hyperedge consisting of all points of $P$ contained in~$R$. We use this result to show that if, for a family of axis-parallel rectangles, any point in the plane is contained in at most $k$ pairwise \emph{crossing} rectangles (a pair of intersecting rectangles such that neither contains a corner of the other is called a crossing pair of rectangles), then we can obtain a support as the union of $k$ planar graphs.

ClassContrast: Bridging the Spatial and Contextual Gaps for Node Representations

from arXiv: Computational Geometry

Authors: Md Joshem Uddin, Astrit Tola, Varin Sikand, Cuneyt Gurcan Akcora, Baris Coskunuzer

Graph Neural Networks (GNNs) have revolutionized the domain of graph representation learning by utilizing neighborhood aggregation schemes in many popular architectures, such as message passing graph neural networks (MPGNNs). This scheme involves iteratively calculating a node's representation vector by aggregating and transforming the representation vectors of its adjacent nodes. Despite their effectiveness, MPGNNs face significant issues, such as oversquashing, oversmoothing, and underreaching, which hamper their performance. Additionally, the reliance of MPGNNs on the homophily assumption, where edges typically connect nodes with similar labels and features, limits their performance in heterophilic contexts, where connected nodes often have significant differences. This necessitates the development of models that can operate effectively in both homophilic and heterophilic settings. In this paper, we propose a novel approach, ClassContrast, grounded in Energy Landscape Theory from Chemical Physics, to overcome these limitations. ClassContrast combines spatial and contextual information, leveraging a physics-inspired energy landscape to model node embeddings that are both discriminative and robust across homophilic and heterophilic settings. Our approach introduces contrast-based homophily matrices to enhance the understanding of class interactions and tendencies. Through extensive experiments, we demonstrate that ClassContrast outperforms traditional GNNs in node classification and link prediction tasks, proving its effectiveness and versatility in diverse real-world scenarios.

General Conversion between ANCF and B-spline Surfaces

from arXiv: Computational Geometry

Authors: Randi Wang, Peng Lan, Zuqing Yu, Nianli Lu

In this paper, general conversion equations are derived between Absolute Nodal Coordinates Formulation (ANCF) finite surface elements and B-spline surfaces, an extension of our previous work on the conversion between ANCF cable elements and B-spline curves. The derivation of the conversion equations rests on the geometric invariance of the ANCF displacement field before and after the conversion. Our study starts by proposing the conversion equation between ANCF finite surface elements and Bezier surfaces, which are special cases of B-spline surfaces, followed by establishing a general conversion equation between ANCF finite surface elements and Bezier surfaces. This general conversion equation serves (1) to realize the one-step direct conversion between ANCF and Bezier surfaces and (2) to convert ANCF finite surface elements directly to Bezier surfaces provided the ANCF nodal coordinates are not independent. The direct conversion from a conditional ANCF finite surface to Bezier surfaces enhances the efficiency and the ability to control and store data in computers during the conversion process. The conversion between ANCF finite surface elements and B-spline surfaces is derived from a conversion of B-spline surfaces to a more general conversion of B-spline surfaces. B-spline basis functions are utilized in the non-recursive form, from which a more efficient conversion equation is obtained compared with an intuitive conversion scheme in which one first converts B-spline surfaces to composite Bezier surfaces by knot insertion and then converts them to ANCF finite surface elements. The obtained conversion equations between ANCF and B-spline surfaces realize the one-step direct conversion.

Influence of control polygon on the generalization of the conversion between ANCF and B-spline surfaces

from arXiv: Computational Geometry

Authors: Peng Lan, Randi Wang, Zuqing Yu

The aim of this study is to establish a general transformation matrix between B-spline surfaces and ANCF surface elements. This study continues our earlier work on the conversion between ANCF and B-spline surfaces. In this paper, a general transformation matrix between Bezier surfaces and ANCF surface elements is established. This general transformation matrix essentially describes the linear relationship between ANCF and Bezier surfaces. Moreover, the general transformation matrix can help to improve the efficiency of the process of transferring the distorted configuration in the CAA back to the CAD, an urgent requirement in engineering practice. In addition, a special Bezier surface control polygon is given in this study. The Bezier surface described with this control polygon can be converted to an ANCF surface element with fewer d.o.f. The converted ANCF surface element with 36 d.o.f. was previously addressed by Dufva and Shabana, so the special control polygon can be regarded as the geometric condition for conversion to an ANCF surface element with 36 d.o.f. Based on the fact that a B-spline surface can be seen as a set of Bezier surfaces connected together, the method to establish a general transformation matrix between ANCF and lower-order B-spline surfaces is given. Notably, the general transformation is given not in a recursive form but in a simplified form.

Can You Link Up With Treewidth?

from arXiv: Data Structures and Algorithms

Authors: Radu Curticapean, Simon Döring, Daniel Neuen, Jiaheng Wang

A central result of Marx [ToC '10] proves that there are $k$-vertex graphs $H$ of maximum degree $3$ such that $n^{o(k /\log k)}$ time algorithms for detecting colorful $H$-subgraphs would refute the Exponential-Time Hypothesis (ETH). This result is widely used to obtain almost-tight conditional lower bounds for parameterized problems under ETH. Our first contribution is a new and fully self-contained proof of this result that further simplifies a recent work by Karthik et al. [SOSA 2024]. Towards this end, we introduce a novel graph parameter, the linkage capacity $\gamma(H)$, and show with an elementary proof that detecting colorful $H$-subgraphs in time $n^{o(\gamma(H))}$ refutes ETH. Then, we use a simple construction of communication networks credited to Bene\v{s} to obtain $k$-vertex graphs of maximum degree $3$ and linkage capacity $\Omega(k / \log k)$, avoiding the use of expander graphs. We also show that every graph $H$ of treewidth $t$ has linkage capacity $\Omega(t / \log t)$, thus recovering the stronger result of Marx [ToC '10] with a simplified proof. Additionally, we obtain new tight lower bounds for certain types of patterns by analyzing their linkage capacity. For example, we prove that almost all $k$-vertex graphs of polynomial average degree $\Omega(k^{\beta})$ for some $\beta > 0$ have linkage capacity $\Theta(k)$, which implies tight lower bounds for such patterns $H$. As an application of these results, we also obtain tight lower bounds for counting small induced subgraphs having a certain property $\Phi$, improving bounds from [Roth et al., FOCS 2020].

Characterizing and Testing Principal Minor Equivalence of Matrices

from arXiv: Data Structures and Algorithms

Authors: Abhranil Chatterjee, Sumanta Ghosh, Rohit Gurjar, Roshan Raj

Two matrices are said to be principal minor equivalent if they have equal corresponding principal minors of all orders. We give a characterization of principal minor equivalence and a deterministic polynomial time algorithm to check if two given matrices are principal minor equivalent. Earlier such results were known for certain special cases like symmetric matrices, skew-symmetric matrices with {0, 1, -1}-entries, and matrices with no cuts (i.e., for any non-trivial partition of the indices, the top right block or the bottom left block must have rank more than 1). As an immediate application, we get an algorithm to check if the determinantal point processes corresponding to two given kernel matrices (not necessarily symmetric) are the same. As another application, we give a deterministic polynomial-time test to check equality of two multivariate polynomials, each computed by a symbolic determinant with a rank 1 constraint on coefficient matrices.
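
A toy illustration of the definition only (not the paper's algorithm, which is not reproduced here): a matrix and its transpose are always principal minor equivalent, since transposing a principal submatrix does not change its determinant, yet the two matrices may differ. A brute-force check in Python on a made-up 3x3 example:

import numpy as np
from itertools import combinations

def principal_minors(A):
    # All principal minors of A, keyed by the index subset defining the submatrix.
    n = A.shape[0]
    return {S: np.linalg.det(A[np.ix_(S, S)])
            for k in range(1, n + 1) for S in combinations(range(n), k)}

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 4.0, 1.0],
              [0.0, 5.0, 6.0]])
B = A.T  # not equal to A, but principal minor equivalent to it
mA, mB = principal_minors(A), principal_minors(B)
assert all(np.isclose(mA[S], mB[S]) for S in mA)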

When is local search both effective and efficient?

from arXiv: Data Structures and Algorithms

Authors: Artem Kaznatcheev, Sofia Vazquez Alferez

Combinatorial optimization problems define fitness landscapes that combine the numerics of the 'fitness' function to be maximized with the combinatorics of which assignments are adjacent. Local search starts at an initial assignment in this landscape and successively moves to assignments until no further improvement is possible among the adjacent assignments. Classic analyses of local search algorithms have focused mostly on the question of effectiveness ("did the algorithm find a good solution?") and often implicitly assumed that there are no doubts about their efficiency ("did the algorithm find the solution quickly?"). But there are many reasons to doubt the efficiency of local search. Many local search algorithms are known to be inefficient even if we focus on fitness landscapes on the hypercube that are single peaked on every subcube (known as semismooth fitness landscapes, completely unimodal pseudo-Boolean functions, or acyclic unique sink orientations). Here, we want to identify the most expressive subclass of single-peaked binary Boolean valued constraint satisfaction problems for which many popular local search algorithms are efficient. In this paper, we introduce the class of conditionally-smooth fitness landscapes where the preferred assignment of a variable $x_j$ depends only on the assignments of variables $x_i$ with $i$ less than $j$ in an associated partial order. We prove that many popular local search algorithms like random ascent, simulated annealing, various jumping rules, and the Kernighan-Lin heuristic are very efficient on conditionally-smooth landscapes. Some other popular local search algorithms like steepest ascent and random facet, however, still require a super-polynomial number of steps on these landscapes. Our hope is to contribute to a fuller understanding of what properties fitness landscapes must have for local search algorithms to be both effective and efficient.
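
As a sketch of the setting only (not of the paper's constructions), random ascent on the Boolean hypercube repeatedly flips a uniformly random fitness-improving bit until it reaches a local peak; the toy landscape below is single peaked, so the walk ends at the all-ones assignment:

import numpy as np

rng = np.random.default_rng(0)

def random_ascent(fitness, n, max_steps=100_000):
    # Flip a uniformly random improving bit until no single-bit flip improves fitness.
    x = rng.integers(0, 2, n)
    for _ in range(max_steps):
        improving = []
        for i in range(n):
            y = x.copy()
            y[i] ^= 1                      # the adjacent assignment with bit i flipped
            if fitness(y) > fitness(x):
                improving.append(i)
        if not improving:
            return x                       # a local peak: no adjacent assignment is better
        x[rng.choice(improving)] ^= 1      # random ascent: pick an improving flip at random
    return x

# Toy smooth landscape: fitness counts ones, so the unique peak is the all-ones assignment.
print(random_ascent(lambda z: int(z.sum()), n=20))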

Expected Maximin Fairness in Max-Cut and other Combinatorial Optimization Problems

from arXiv: Data Structures and Algorithms

Authors: Jad Salem, Reuben Tate, Stephan Eidenbenz

Maximin fairness is the ideal that the worst-off group (or individual) should be treated as well as possible. Literature on maximin fairness in various decision-making settings has grown in recent years, but theoretical results are sparse. In this paper, we explore the challenges inherent to maximin fairness in combinatorial optimization. We begin by showing that (1) optimal maximin-fair solutions are bounded by non-maximin-fair optimal solutions, and (2) stochastic maximin-fair solutions exceed their deterministic counterparts in expectation for a broad class of combinatorial optimization problems. In the remainder of the paper, we use the special case of Max-Cut to demonstrate challenges in defining and implementing maximin fairness.

GORAM: Graph-oriented ORAM for Efficient Ego-centric Queries on Federated Graphs

from arXiv: Data Structures and Algorithms

Authors: Xiaoyu Fan, Kun Chen, Jiping Yu, Xiaowei Zhu, Yunyi Chen, Huanchen Zhang, Wei Xu

Ego-centric queries, focusing on a target vertex and its direct neighbors, are essential for various applications. Enabling such queries on graphs owned by mutually distrustful data providers, without breaching privacy, holds promise for more comprehensive results. In this paper, we propose GORAM, a graph-oriented data structure that enables efficient ego-centric queries on federated graphs with strong privacy guarantees. GORAM is built upon secure multi-party computation (MPC) and ensures that no single party can learn any sensitive information about the graph data or the querying keys during the process. However, achieving practical performance while guaranteeing privacy presents a challenge. To overcome this, GORAM is designed to partition the federated graph and construct an Oblivious RAM (ORAM)-inspired index atop these partitions. This design enables each ego-centric query to process only a single partition, which can be accessed quickly and securely. To evaluate the performance of GORAM, we developed a prototype querying engine on a real-world MPC framework. We conduct a comprehensive evaluation with five commonly used queries on both synthetic and real-world graphs. Our evaluation shows that all benchmark queries can be completed in just 58.1 milliseconds to 35.7 seconds, even on graphs with up to 41.6 million vertices and 1.4 billion edges. To the best of our knowledge, this represents the first instance of processing billion-scale graphs with practical performance on MPC.

On the Resilience of Fast Failover Routing Against Dynamic Link Failures

from arXiv: Data Structures and Algorithms

Authors: Wenkai Dai, Klaus-Tycho Foerster, Stefan Schmid

Modern communication networks feature local fast failover mechanisms in the data plane, to swiftly respond to link failures with pre-installed rerouting rules. This paper explores resilient routing meant to tolerate $\leq k$ simultaneous link failures, ensuring packet delivery, provided that the source and destination remain connected. While past theoretical works studied failover routing under static link failures, i.e., links which permanently and simultaneously fail, real-world networks often face link flapping--dynamic down states caused by, e.g., numerous short-lived software-related faults. Thus, in this initial work, we re-investigate the resilience of failover routing against link flapping, by categorizing link failures into static, semi-dynamic (removing the assumption that links fail simultaneously), and dynamic (removing the assumption that links fail permanently) types, shedding light on the capabilities and limitations of failover routing under these scenarios. We show that $k$-edge-connected graphs exhibit $(k-1)$-resilient routing against dynamic failures for $k \leq 5$. We further show that this result extends to arbitrary $k$ if it is possible to rewrite $\log k$ bits in the packet header. Rewriting $3$ bits suffices to cope with $k$ semi-dynamic failures. However, on general graphs, tolerating $2$ dynamic failures becomes impossible without bit-rewriting. Even by rewriting $\log k$ bits, resilient routing cannot resolve $k$ dynamic failures, demonstrating the limitation of local fast rerouting.

Thursday, October 03

Inverse Problems

from Ben Recht

A topic so cool it doesn't need a catchy title.

This is the live blog of Lecture 11 of my graduate class “Convex Optimization.” A Table of Contents is here.

The most underappreciated part of Boyd & Vandenberghe is the endless trove of examples. Three chapters of the book are dedicated to modeling examples, and we’re now moving into the part of the course where we apply the theory to modeling.

We start with a topic dear to my heart: linear inverse problems. I'm casting a wide net with this term here.[1] Let's skip the linear part for a second. What do I mean by an inverse problem? Imaging is perhaps the easiest problem to describe. We imagine some 2D or 3D representation of the world we'd like to capture through a measurement device and want to use what we know about our measurements to build the image.

In a CT scan, we send lines of X-rays through your body and then use a computer to decode an image of your insides. MRI is similar, except it uses complex measurements of the magnetic fields induced by molecules in your body. Neither of these modalities takes “images” in the way we think a camera does, yet we can use algorithms to decode what’s inside. These are canonical inverse problems. We measure the state of the world through some process, and we’d like to determine the state from our measurements.

Even cameras solve inverse problems these days. Your phone can take multiple blurry snapshots and use algorithms to merge them into a single deblurred image. Similarly, multiple images can be combined to yield high dynamic range when the lighting is unfavorable.

All of these modalities combine a forward model, which simulates the physics of what happens between the world and the imaging sensor, and an image model, which builds in what we think should be true about our final image. Algorithms then combine these together to produce a final reconstruction.[2]

Surprisingly, all of the examples above can be computed using linear models. The mapping from the world to the measurement is a linear function, insofar as if you “add stuff,” the resulting measurement is their sum. For example, the measured output in a CT scan is the amount of X-rays that made it through the body. This decreases linearly in the amount of absorbing material between the X-ray emitter and detector. An image blur is a linear operation that adds together multiple views of a scene. A linear inverse problem is one where the mapping from reality to our measurement is modelable as a linear transformation.
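
As a concrete (and entirely made-up) illustration of this linearity, a one-dimensional blur can be written as an explicit matrix acting on the scene, and blurring the sum of two scenes gives the sum of the two blurred measurements:

import numpy as np

n = 8
x = np.zeros(n)
x[3] = 1.0                            # a tiny 1-D "scene": one bright point

A = np.zeros((n, n))                  # 3-tap moving-average blur as an explicit matrix
for i in range(n):
    for j in range(max(0, i - 1), min(n, i + 2)):
        A[i, j] = 1.0 / 3.0

y = A @ x                             # the measurement is a linear function of the scene
x2 = np.zeros(n)
x2[6] = 1.0                           # "add stuff" to the scene...
assert np.allclose(A @ (x + x2), y + A @ x2)   # ...and the measurements add up too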

There are surprisingly many problems that can be posed as linear inverse problems. A few examples include finding a model of a dynamical system that predicts time series data, estimating biomarkers relevant to particular disease outcomes, and finding the delays in GPS signals so that you can triangulate and find your location. It's popular to use linear maps to model your content preferences based on your past consumption behavior and all of the consumption behavior of everyone else on the internet. I could do a whole blog series on inverse problems and their applications. For this course, we'll settle for one post.

If we have a linear model, we are saying

measurement = forward model (image) + noise

where the “forward model” function is a linear mapping. Our goal is to figure out what the image is by removing the noise and inverting the forward model. The problem here is that there is an infinite set of images and noises that produce the same measurement. What do you attribute to noise? What do you attribute to signal? The answer is to come up with some cost function that balances attribution. The general linear inverse problem is then given by an optimization problem of the form

minimize (1-h) * plausibility(noise) + h * plausibility(image)
subject to measurement = forward model (image) + noise

where h is a constant between 0 and 1. The plausibility functions are different for the noise and the image and are part of the modeling process. You want to pick functions that are easy to optimize (hint: convex) and large for implausible realizations of the signals you care about. If you were Bayesian, you might call these priors about the signals in your problem. But prior is a loaded word in Bayesian land, implying things about probabilities and whatnot. Today, I’m going to stick with calling them plausibility functions to avoid statistical mess.

The most common plausibility functions we use date back to Gauss. One example is the mean-sum-of-squares, which captures some notion of a signal’s variability. Another plausibility penalty would be the sum of absolute deviations. This plausibility function is useful when you expect the noise process to be bursty or sparse. 

Similarly, different convex plausibility functions encourage particular image structures. Sparse images are encouraged by penalizing the sum of absolute values. Piecewise constant images are encouraged by penalizing the sum of absolute values of the derivative. Smooth images are encouraged by penalizing the sum of the squares of the second derivative.
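
To make this recipe concrete, here is a minimal sketch in cvxpy, with made-up problem sizes, a random stand-in forward model, and an arbitrarily chosen h, pairing a sum-of-squares plausibility for the noise with a sum-of-absolute-values plausibility for a sparse image:

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 100, 40                                    # hypothetical image length and measurement count
A = rng.standard_normal((m, n))                   # stand-in linear forward model
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0     # a sparse "image"
y = A @ x_true + 0.01 * rng.standard_normal(m)    # measurement = forward model(image) + noise

h = 0.9                                           # the hyperparameter from the text
image = cp.Variable(n)
noise = cp.Variable(m)
problem = cp.Problem(
    cp.Minimize((1 - h) * cp.sum_squares(noise)   # plausibility(noise): sum of squares
                + h * cp.norm1(image)),           # plausibility(image): sum of absolute values
    [y == A @ image + noise])                     # measurement = forward model(image) + noise
problem.solve()
print("recovered nonzeros:", int(np.sum(np.abs(image.value) > 1e-3)))

Substituting the constraint into the objective recovers the familiar least-squares-plus-l1 trade-off between data fit and sparsity.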

The modeling choices here are intimidating as it's not clear what plausibility penalties you should pick. But there's a rule of thumb that I've found helpful in linear inverse problems (and which I may have written a dozen or so papers about). The plausibility function is trying to encourage your algorithm to choose a simple signal. One meaning of simple is “the signal can be written as a short sum of basic signals.” For example, if you assume your signal is just a spike train, then it can be written as a sum of spikes. Sparse images are a sum of a few basic point sources. A low-rank matrix is a sum of a few rank-one matrices. In all of these examples, the assumption is that the signal is a sum of a few atoms. If you know the atoms, a reasonable plausibility function penalizes the coefficients needed to write a signal as a sum of atoms. In math, for those who like equations:
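
(The displayed equation does not survive in this text version of the post. In the notation of the atomic-norm literature cited in footnote [3], for a set of atoms $\mathcal{A}$, the penalty can be written roughly as)

$\mathrm{plausibility}(x) \;=\; \inf\Big\{\, \sum_{a \in \mathcal{A}} c_a \;:\; x = \sum_{a \in \mathcal{A}} c_a\, a,\;\; c_a \ge 0 \,\Big\}.$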

When the atoms are unit vectors, this is the Euclidean norm (root mean squared error). When the atoms are standard basis vectors, this is the sum of absolute values (l1-norm). There are tons of other options. No matter which atomic set you choose, this plausibility function is convex. The general linear inverse problem is thus posed as a convex optimization problem. This gives you a general cookbook for posing and solving inverse problems.[3]

Now what about that constant h? It is called a hyperparameter and is part of your convex model. Sometimes you know what that parameter should be, but other times you need to figure it out experimentally. But remember, this whole model is made up, so don’t fret if there are parts you can’t specify by pure reason alone.


[1] I wish instead of having to use the dumb terms “machine learning” or “artificial intelligence,” we could say, “I work on inverse problems.” If only that carried the same cachet. Maybe I'll work on improving our field's marketing.

[2] Today, I'm going to skip over the philosophy of why we've decided to trust these algorithmic reconstructions of reality. Why do we believe that what we see in an MRI image is really what's inside a body? A fun post for another day!

[3] If you want to read more about atomic decompositions and inverse problems, one of my favorite papers on my CV, The Convex Geometry of Linear Inverse Problems, with Chandrasekaran, Parrilo, and Willsky, describes this general view and its implications.

By Ben Recht

Theory at the Institute and Beyond, October 2024

from Simons Institute Blog

by Nikhil Srivastava (senior scientist, Simons Institute for the Theory of Computing) Some results mark the end of a long quest, whereas others open up new worlds for exploration. Then there are works that change our perspective and make the … Continue reading →

By Simons Institute Editor

Optimization of a Quantum Subset Sum Oracle

from arXiv: Computational Complexity

Authors: Angelo Benoit, Sam Schwartz, Ron K. Cytron

We investigate the implementation of an oracle for the Subset Sum problem for quantum search using Grover's algorithm. Our work concerns reducing the number of qubits, gates, and multi-controlled gates required by the oracle. We describe the compilation of a Subset Sum instance into a quantum oracle, using a Python library we developed for Qiskit and have published on GitHub. We then present techniques to conserve qubits and gates along with experiments showing their effectiveness on random instances of Subset Sum. These techniques include moving from fixed to varying-width arithmetic, using partial sums of a set's integers to determine specific integer widths, and sorting the set to obtain provably the most efficient partial sums. We present a new method for computing bit-string comparisons that avoids arbitrarily large multiple-control gates, and we introduce a simple modification to the oracle that allows for approximate solutions to the Subset Sum problem via Grover search.

PROXI: Challenging the GNNs for Link Prediction

from arXiv: Computational Geometry

Authors: Astrit Tola, Jack Myrick, Baris Coskunuzer

Over the past decade, Graph Neural Networks (GNNs) have transformed graph representation learning. In the widely adopted message-passing GNN framework, nodes refine their representations by aggregating information from neighboring nodes iteratively. While GNNs excel in various domains, recent theoretical studies have raised concerns about their capabilities. GNNs aim to address various graph-related tasks by utilizing such node representations; however, this one-size-fits-all approach proves suboptimal for diverse tasks. Motivated by these observations, we conduct empirical tests to compare the performance of current GNN models with more conventional and direct methods in link prediction tasks. Introducing our model, PROXI, which leverages proximity information of node pairs in both graph and attribute spaces, we find that standard machine learning (ML) models perform competitively, even outperforming cutting-edge GNN models when applied to these proximity metrics derived from node neighborhoods and attributes. This holds true across both homophilic and heterophilic networks, as well as small and large benchmark datasets, including those from the Open Graph Benchmark (OGB). Moreover, we show that augmenting traditional GNNs with PROXI significantly boosts their link prediction performance. Our empirical findings corroborate the previously mentioned theoretical observations and imply that there exists ample room for enhancement in current GNN models to reach their potential.

Recovering Manifold Structure Using Ollivier-Ricci Curvature

from arXiv: Computational Geometry

Authors: Tristan Luca Saidi, Abigail Hickok, Andrew J. Blumberg

We introduce ORC-ManL, a new algorithm to prune spurious edges from nearest neighbor graphs using a criterion based on Ollivier-Ricci curvature and estimated metric distortion. Our motivation comes from manifold learning: we show that when the data generating the nearest-neighbor graph consists of noisy samples from a low-dimensional manifold, edges that shortcut through the ambient space have more negative Ollivier-Ricci curvature than edges that lie along the data manifold. We demonstrate that our method outperforms alternative pruning methods and that it significantly improves performance on many downstream geometric data analysis tasks that use nearest neighbor graphs as input. Specifically, we evaluate on manifold learning, persistent homology, dimension estimation, and others. We also show that ORC-ManL can be used to improve clustering and manifold learning of single-cell RNA sequencing data. Finally, we provide empirical convergence experiments that support our theoretical findings.
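
A simplified version of the pruning idea can be sketched as follows: approximate the Ollivier-Ricci curvature of an edge as $1 - W_1(\mu_u,\mu_v)/d(u,v)$, with $\mu_u$ uniform on the neighbours of $u$, and drop strongly negative edges. The lazy-walk parameters, noise model, and metric-distortion estimate used by ORC-ManL are omitted, and the threshold below is arbitrary; this illustrates the criterion, not the authors' implementation.

```python
# Simplified curvature-based pruning of a proximity graph (assumed formulation).
import networkx as nx
import numpy as np
from scipy.optimize import linprog

def wasserstein1(S, T, dist):
    """Exact W1 between uniform distributions on node sets S and T."""
    n, m = len(S), len(T)
    a = np.full(n, 1.0 / n)
    b = np.full(m, 1.0 / m)
    C = np.array([[dist[s][t] for t in T] for s in S], dtype=float)
    A_eq, b_eq = [], []
    for i in range(n):                       # row-sum constraints
        row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1.0
        A_eq.append(row); b_eq.append(a[i])
    for j in range(m):                       # column-sum constraints
        col = np.zeros(n * m); col[j::m] = 1.0
        A_eq.append(col); b_eq.append(b[j])
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun

def prune_negative_edges(G, threshold=-0.5):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    H = G.copy()
    for u, v in list(G.edges()):
        kappa = 1.0 - wasserstein1(list(G[u]), list(G[v]), dist) / dist[u][v]
        if kappa < threshold:                # "shortcut" edges tend to be very negative
            H.remove_edge(u, v)
    return H

G0 = nx.random_geometric_graph(60, 0.25, seed=1)
G0 = G0.subgraph(max(nx.connected_components(G0), key=len)).copy()
H = prune_negative_edges(G0)
print(G0.number_of_edges(), "->", H.number_of_edges(), "edges after pruning")
```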

Approximating Klee's Measure Problem and a Lower Bound for Union Volume Estimation

from arXiv: Data Structures and Algorithms

Authors: Karl Bringmann, Kasper Green Larsen, André Nusser, Eva Rotenberg, Yanheng Wang

Union volume estimation is a classical algorithmic problem. Given a family of objects $O_1,\ldots,O_n \subseteq \mathbb{R}^d$, we want to approximate the volume of their union. In the special case where all objects are boxes (also known as hyperrectangles) this is known as Klee's measure problem. The state-of-the-art algorithm [Karp, Luby, Madras '89] for union volume estimation and Klee's measure problem in constant dimension $d$ computes a $(1+\varepsilon)$-approximation with constant success probability by using a total of $O(n/\varepsilon^2)$ queries of the form (i) ask for the volume of $O_i$, (ii) sample a point uniformly at random from $O_i$, and (iii) query whether a given point is contained in $O_i$. We show that if one can only interact with the objects via the aforementioned three queries, the query complexity of [Karp, Luby, Madras '89] is indeed optimal, i.e., $\Omega(n/\varepsilon^2)$ queries are necessary. Our lower bound already holds for estimating the union of equiponderous axis-aligned polygons in $\mathbb{R}^2$, and even if the algorithm is allowed to inspect the coordinates of the points sampled from the polygons, and still holds when a containment query can ask containment of an arbitrary (not sampled) point. Guided by the insights of the lower bound, we provide a more efficient approximation algorithm for Klee's measure problem improving the $O(n/\varepsilon^2)$ time to $O((n+\frac{1}{\varepsilon^2}) \cdot \log^{O(d)}n)$. We achieve this improvement by exploiting the geometry of Klee's measure problem in various ways: (1) Since we have access to the boxes' coordinates, we can split the boxes into classes of boxes of similar shape. (2) Within each class, we show how to sample from the union of all boxes, by using orthogonal range searching. And (3) we exploit that boxes of different classes have small intersection, for most pairs of classes.
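
For concreteness, the classical estimator referenced above can be sketched using only the three queries: sample a box with probability proportional to its volume, draw a uniform point inside it, count how many boxes contain the point, and average $1/c$; the union volume is the sum of the volumes times this average.

```python
# Sketch of the Karp-Luby-Madras importance-sampling estimator for the volume of
# a union of axis-aligned boxes, using only volume, sampling, and containment queries.
import numpy as np

rng = np.random.default_rng(0)

def volume(box):
    lo, hi = box
    return float(np.prod(hi - lo))

def sample_point(box):
    lo, hi = box
    return rng.uniform(lo, hi)

def contains(box, x):
    lo, hi = box
    return bool(np.all(lo <= x) and np.all(x <= hi))

def klm_union_volume(boxes, samples=20000):
    vols = np.array([volume(b) for b in boxes])
    total = vols.sum()
    est = 0.0
    for _ in range(samples):
        i = rng.choice(len(boxes), p=vols / total)  # pick a box proportionally to volume
        x = sample_point(boxes[i])                  # uniform point inside it
        c = sum(contains(b, x) for b in boxes)      # how many boxes cover the point
        est += 1.0 / c
    return total * est / samples                    # E[1/c] * sum of volumes = union volume

boxes = [(np.array([0.0, 0.0]), np.array([1.0, 1.0])),
         (np.array([0.5, 0.5]), np.array([1.5, 1.5]))]
print("estimated union volume:", klm_union_volume(boxes))  # true value is 1.75
```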

Positional Attention: Out-of-Distribution Generalization and Expressivity for Neural Algorithmic Reasoning

from arXiv: Data Structures and Algorithms

Authors: Artur Back de Luca, George Giapitzakis, Shenghao Yang, Petar Veličković, Kimon Fountoulakis

There has been a growing interest in the ability of neural networks to solve algorithmic tasks, such as arithmetic, summary statistics, and sorting. While state-of-the-art models like Transformers have demonstrated good generalization performance on in-distribution tasks, their out-of-distribution (OOD) performance is poor when trained end-to-end. In this paper, we focus on value generalization, a common instance of OOD generalization where the test distribution has the same input sequence length as the training distribution, but the value ranges in the training and test distributions do not necessarily overlap. To address this issue, we propose that using fixed positional encodings to determine attention weights, referred to as positional attention, enhances empirical OOD performance while maintaining expressivity. We support our claim about expressivity by proving that Transformers with positional attention can effectively simulate parallel algorithms.
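
The toy NumPy sketch below conveys our reading of positional attention (it is not the authors' architecture): attention scores are computed from fixed sinusoidal encodings only, so the mixing pattern is independent of the input values, which enter only through the value projection.

```python
# Toy "positional attention" layer: attention weights depend only on positions.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def sinusoidal_encoding(n, d):
    pos = np.arange(n)[:, None]
    i = np.arange(d)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def positional_attention_layer(x, d_enc=16, seed=0):
    """x: (n, d_in) input values. Attention scores use only the positions."""
    n, d_in = x.shape
    rng = np.random.default_rng(seed)
    P = sinusoidal_encoding(n, d_enc)
    Wq = rng.normal(size=(d_enc, d_enc)) / np.sqrt(d_enc)
    Wk = rng.normal(size=(d_enc, d_enc)) / np.sqrt(d_enc)
    Wv = rng.normal(size=(d_in, d_in)) / np.sqrt(d_in)
    scores = (P @ Wq) @ (P @ Wk).T / np.sqrt(d_enc)   # independent of the values in x
    A = softmax(scores, axis=-1)
    return A @ (x @ Wv)                                # values mixed by position-only weights

x = np.random.default_rng(1).normal(size=(8, 4)) * 1000.0  # large out-of-range values
print(positional_attention_layer(x).shape)  # (8, 4); the attention pattern ignores the scale of x
```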

Efficient Statistics With Unknown Truncation, Polynomial Time Algorithms, Beyond Gaussians

from arXiv: Data Structures and Algorithms

Authors: Jane H. Lee, Anay Mehrotra, Manolis Zampetakis

We study the estimation of distributional parameters when samples are shown only if they fall in some unknown set $S \subseteq \mathbb{R}^d$. Kontonis, Tzamos, and Zampetakis (FOCS'19) gave a $d^{\mathrm{poly}(1/\varepsilon)}$ time algorithm for finding $\varepsilon$-accurate parameters for the special case of Gaussian distributions with diagonal covariance matrix. Recently, Diakonikolas, Kane, Pittas, and Zarifis (COLT'24) showed that this exponential dependence on $1/\varepsilon$ is necessary even when $S$ belongs to some well-behaved classes. These works leave the following open problems which we address in this work: Can we estimate the parameters of any Gaussian or even extend beyond Gaussians? Can we design $\mathrm{poly}(d/\varepsilon)$ time algorithms when $S$ is a simple set such as a halfspace? We make progress on both of these questions by providing the following results: 1. Toward the first question, we give a $d^{\mathrm{poly}(\ell/\varepsilon)}$ time algorithm for any exponential family that satisfies some structural assumptions and any unknown set $S$ that is $\varepsilon$-approximable by degree-$\ell$ polynomials. This result has two important applications: 1a) The first algorithm for estimating arbitrary Gaussian distributions from samples truncated to an unknown $S$; and 1b) The first algorithm for linear regression with unknown truncation and Gaussian features. 2. To address the second question, we provide an algorithm with runtime $\mathrm{poly}(d/\varepsilon)$ that works for a set of exponential families (containing all Gaussians) when $S$ is a halfspace or an axis-aligned rectangle. Along the way, we develop tools that may be of independent interest, including, a reduction from PAC learning with positive and unlabeled samples to PAC learning with positive and negative samples that is robust to certain covariate shifts.

Theoretical Lower Bounds for the Oven Scheduling Problem

from arXiv: Data Structures and Algorithms

Authors: Francesca Da Ros, Marie-Louise Lackner, Nysret Musliu

The Oven Scheduling Problem (OSP) is an NP-hard real-world parallel batch scheduling problem arising in the semiconductor industry. The objective of the problem is to schedule a set of jobs on ovens while minimizing several factors, namely total oven runtime, job tardiness, and setup costs. At the same time, it must adhere to various constraints such as oven eligibility and availability, job release dates, setup times between batches, and oven capacity limitations. The key to obtaining efficient schedules is to process compatible jobs simultaneously in batches. In this paper, we develop theoretical, problem-specific lower bounds for the OSP that can be computed very quickly. We thoroughly examine these lower bounds, evaluating their quality and exploring their integration into existing solution methods. Specifically, we investigate their contribution to exact methods and a metaheuristic local search approach using simulated annealing. Moreover, these problem-specific lower bounds enable us to assess the solution quality for large instances for which exact methods often fail to provide tight lower bounds.

Efficient PAC Learning of Halfspaces with Constant Malicious Noise Rate

from arXiv: Data Structures and Algorithms

Authors: Xiaoyu Li, Jie Shen

Understanding noise tolerance of learning algorithms under certain conditions is a central quest in learning theory. In this work, we study the problem of computationally efficient PAC learning of halfspaces in the presence of malicious noise, where an adversary can corrupt both instances and labels of training samples. The best-known noise tolerance either depends on a target error rate under distributional assumptions or on a margin parameter under large-margin conditions. In this work, we show that when both types of conditions are satisfied, it is possible to achieve {\em constant} noise tolerance by minimizing a reweighted hinge loss. Our key ingredients include: 1) an efficient algorithm that finds weights to control the gradient deterioration from corrupted samples, and 2) a new analysis on the robustness of the hinge loss equipped with such weights.
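
A generic subgradient-descent sketch of "minimizing a reweighted hinge loss" is given below. The weighting rule used here (zeroing out the examples with the largest current hinge losses) is purely a placeholder; designing the right weights is precisely the paper's contribution, and the data model is simplified.

```python
# Generic reweighted hinge-loss minimization (placeholder weights, not the paper's scheme).
import numpy as np

def reweighted_hinge_descent(X, y, steps=500, lr=0.1):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)
        losses = np.maximum(0.0, 1.0 - margins)
        weights = np.where(losses <= np.quantile(losses, 0.9), 1.0, 0.0)  # placeholder rule
        active = (margins < 1.0).astype(float)
        grad = -(weights * active * y) @ X / n   # subgradient of the weighted hinge loss
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
n, d = 500, 10
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)
corrupt = rng.random(n) < 0.05            # a small constant fraction of corruptions
y[corrupt] *= -1                           # adversary flips labels...
X[corrupt] *= 5.0                          # ...and may also corrupt instances
w_hat = reweighted_hinge_descent(X, y)
X_test = rng.normal(size=(2000, d))
print("test agreement with the true halfspace:",
      np.mean(np.sign(X_test @ w_hat) == np.sign(X_test @ w_star)))
```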

Low depth amplitude estimation without really trying

from arXiv: Data Structures and Algorithms

Authors: Dinh-Long Vu, Bin Cheng, Patrick Rebentrost

Standard quantum amplitude estimation algorithms provide quadratic speedup to Monte-Carlo simulations but require a circuit depth that scales as inverse of the estimation error. In view of the shallow depth in near-term devices, the precision achieved by these algorithms would be low. In this paper we bypass this limitation by performing the classical Monte-Carlo method on the quantum algorithm itself, achieving a higher than classical precision using low-depth circuits. We require the quantum algorithm to be weakly biased in order to avoid error accumulation during this process. Our method is parallel and can be as weakly biased as the constituent algorithm in some cases.

The Telephone $k$-Multicast Problem

from arXiv: Data Structures and Algorithms

Authors: Daniel Hathcock, Guy Kortsarz, R. Ravi

We consider minimum time multicasting problems in directed and undirected graphs: given a root node and a subset of $t$ terminal nodes, multicasting seeks to find the minimum number of rounds within which all terminals can be informed with a message originating at the root. In each round, the telephone model we study allows the information to move via a matching from the informed nodes to the uninformed nodes. Since minimum time multicasting in digraphs is poorly understood compared to the undirected variant, we study an intermediate problem in undirected graphs that specifies a target $k < t$, and requires that only $k$ of the terminals be informed in the minimum number of rounds. For this problem, we improve implications of prior results and obtain an $\tilde{O}(t^{1/3})$ multiplicative approximation. For the directed version, we obtain an {\em additive} $\tilde{O}(k^{1/2})$ approximation algorithm (with a poly-logarithmic multiplicative factor). Our algorithms are based on reductions to the related problems of finding $k$-trees of minimum poise (sum of maximum degree and diameter) and applying a combination of greedy network decomposition techniques and set covering under partition matroid constraints.
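
The telephone model itself is easy to simulate: in each round, pick a matching between informed nodes and their uninformed neighbours and pass the message along it. The sketch below uses a maximum matching per round as a greedy heuristic, just to make the model concrete; it is not an optimal or approximation-guaranteed schedule.

```python
# Round-by-round simulation of the telephone broadcast model (illustration only).
import networkx as nx

def telephone_rounds(G, root, targets):
    informed = {root}
    rounds = 0
    while not targets <= informed:
        B = nx.Graph()
        senders = [("in", u) for u in informed]
        B.add_nodes_from(senders)                       # keep isolated senders on the top side
        for u in informed:
            for v in G[u]:
                if v not in informed:
                    B.add_edge(("in", u), ("out", v))
        if B.number_of_edges() == 0:
            raise ValueError("some targets are unreachable from the root")
        matching = nx.bipartite.maximum_matching(B, top_nodes=senders)
        for node in matching:
            if node[0] == "out":                        # each matched uninformed node learns the message
                informed.add(node[1])
        rounds += 1
    return rounds

G = nx.balanced_tree(2, 4)                              # binary tree of depth 4
leaves = {v for v in G if G.degree(v) == 1}
print("rounds to inform all leaves:", telephone_rounds(G, 0, leaves))
```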

Wednesday, October 02

TCS+ talk: Wednesday, October 9 — Renato Ferreira Pinto Jr, U Waterloo

from TCS+ Seminar Series

The next TCS+ talk will take place this coming Wednesday, October 9th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Renato Ferreira Pinto Jr from U Waterloo will speak about “Directed Isoperimetry and Monotonicity Testing: A Dynamical Approach” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. Registration is not required to attend the interactive talk, and the link will be posted on the website the day prior to the talk; however, by registering in the form, you will receive a reminder, along with the link. (The recorded talk will also be posted on our website afterwards.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: This work explores the connection between classical isoperimetric inequalities, their directed analogues, and monotonicity testing. We study the setting of real-valued functions $f : [0,1]^d \to \mathbb{R}$ on the solid unit cube, where the goal is to test with respect to the $L^p$ distance. Our goals are twofold: to further understand the relationship between classical and directed isoperimetry, and to give a monotonicity tester with sublinear query complexity in this setting.

Our main results are 1) an $L^2$ monotonicity tester for $M$-Lipschitz functions with query complexity $\widetilde O(\sqrt{d} M^2 / \epsilon^2)$ and, behind this result, 2) the directed Poincaré inequality $\mathsf{dist}^{\mathsf{mono}}_2(f)^2 \le C\, \mathbb{E}[|\nabla^- f|^2]$, where the “directed gradient” operator $\nabla^-$ measures the local violations of monotonicity of $f$.

To prove the second result, we introduce a partial differential equation (PDE), the directed heat equation, which takes a one-dimensional function $f$ into a monotone function $f^*$ over time and enjoys many desirable analytic properties. We obtain the directed Poincaré inequality by combining convergence aspects of this PDE with the theory of optimal transport. Crucially for our conceptual motivation, this proof is in complete analogy with the mathematical physics perspective on the classical Poincaré inequality, namely as characterizing the convergence of the standard heat equation toward equilibrium.

By plustcs

TR24-146 | Lower Bounds on the Overhead of Indistinguishability Obfuscation | Zhenjian Lu, Noam Mazor, Igor Oliveira, Rafael Pass

from ECCC Papers

We consider indistinguishability obfuscation (iO) for multi-output circuits $C:\{0,1\}^n\to\{0,1\}^n$ of size $s$, where $s$ is the number of AND/OR/NOT gates in $C$. Under the worst-case assumption that NP $\nsubseteq$ BPP, we establish that there is no efficient indistinguishability obfuscation scheme that outputs circuits of size $s + o(s/ \log s)$. In other words, to be secure, an efficient iO scheme must incur an $\Omega(s/ \log s)$ additive overhead in the size of the obfuscated circuit. The hardness assumption under which this negative result holds is minimal since an optimal iO scheme with no circuit size overhead exists if NP $\subseteq$ BPP. Expanding on this result, we also rule out iO for single-output database-aided circuits with an arbitrary polynomial overhead in circuit size. This strengthens an impossibility result by Goldwasser and Rothblum [GR07], which considered circuits with access to an exponential-length database that the obfuscator has oracle access to; in contrast, our impossibility result holds even w.r.t. polynomial-size databases and even w.r.t. obfuscators that may run in time polynomial in the size of the database (and thus may read the whole database). The proof of our main result builds on a connection between obfuscation and meta-complexity put forward by Mazor and Pass [MP24], and on the NP-hardness of circuit minimization for multi-output circuits established by Loff, Ilango, and Oliveira [ILO20], together with other techniques from cryptography and complexity theory.

TR24-145 | Provability of the Circuit Size Hierarchy and Its Consequences | Marco Carmosino, Valentine Kabanets, Antonina Kolokolova, Igor Carboni Oliveira, Dimitrios Tsintsilidas

from ECCC Papers

The Circuit Size Hierarchy CSH$^a_b$ states that if $a > b \geq 1$ then the set of functions on $n$ variables computed by Boolean circuits of size $n^a$ is strictly larger than the set of functions computed by circuits of size $n^b$. This result, which is a cornerstone of circuit complexity theory, follows from the non-constructive proof of the existence of functions of large circuit complexity obtained by Shannon in 1949. Are there more "constructive" proofs of the Circuit Size Hierarchy? Can we quantify this? Motivated by these questions, we investigate the provability of CSH$^a_b$ in theories of bounded arithmetic. Among other contributions, we establish the following results: (1) Given any $a > b > 1$, CSH$^a_b$ is provable in Buss's theory T$^2_2$. (2) In contrast, if there are constants $a > b > 1$ such that CSH$^a_b$ is provable in the theory T$^1_2$, then there is a constant $\varepsilon > 0$ such that $P^{NP}$ requires non-uniform circuits of size $n^{1 + \varepsilon}$. In other words, an improved upper bound on the proof complexity of CSH$^a_b$ would lead to new lower bounds in complexity theory. We complement these results with a proof of the Formula Size Hierarchy (FSH$^a_b$) in PV$_1$ with parameters $a > 2$ and $b = 3/2$. This is in contrast with typical formalizations of complexity lower bounds in bounded arithmetic, which require APC$_1$ or stronger theories and are not known to hold even in T$^1_2$.

Lecturer, Theory of Computing at MIT (apply by October 15, 2024)

from CCI: jobs

LECTURER, THEORY OF COMPUTING (TOC), to assume a position with a focus on teaching introductory undergraduate theoretical computer science courses. The position start date could be as soon as January 15, 2025, though it may be negotiated. Applications will be evaluated on a rolling basis starting October 15, 2024. For more information, see the URL below.

Website: https://www.eecs.mit.edu/career-opportunities-at-eecs/
Email: rcm@mit.edu

By shacharlovett

Favorite Theorems: Gradient Descent

from Computational Complexity

September Edition

Who thought the algorithm behind machine learning would have cool complexity implications?

The Complexity of Gradient Descent: CLS = PPAD ∩ PLS
John Fearnley, Paul Goldberg, Alexandros Hollender and Rahul Savani

Let's unpack these classes, subclasses of TFNP, where for every input we know there is an easily verifiable solution and we are looking at the complexity of finding it. PPAD is the class famous for having the problem of finding a Nash Equilibrium as a complete problem; see this post for details.

PLS is the class of problems where we look for a local minimum. Finding a global minimum is NP-complete--think vertex cover. Finding a local minimum is often easier but still could be hard if you are optimizing over an exponential set of values.

CLS is a bit trickier to define: basically, you are finding an approximate local minimum of a function mapping three real variables to one real value.

The authors show that gradient descent is complete for PPAD ∩ PLS even if you only use two input variables. Since gradient descent is in CLS, the equality follows. 

More in my 2021 post. On that post author Paul Goldberg commented

The paper is a fine example of the humorous stereotype of complexity theorists proving a problem "hard" when meanwhile the problem is being routinely solved in practice in the real world.

Nevertheless it's a neat complexity result and now officially one of my favorite theorems.

By Lance Fortnow

Celebrating Irrationality: Frank Calegari, Vesselin Dimitrov, and Yunqing Tang Proved the Irrationality of 1/1²-1/2²+1/4²-1/5²+1/7²-1/8²+ …

from Gil Kalai

There are very many irrational numbers but proving irrationality of a specific number is not a common event. A few weeks ago Frank Calegari, Vesselin Dimitrov, and Yunqing Tang posted a paper that proved the irrationality of 

$\displaystyle L(2,\chi_{-3}) = \frac{1}{1^2} -\frac{1}{2^2} +\frac{1}{4^2}-\frac{1}{5^2} +\frac{1}{7^2}-\frac{1}{8^2}+\cdots.$

In fact they proved even more: that $1$, $\zeta(2)$, and $L(2,\chi_{-3})$ are linearly independent over $\mathbb{Q}$. This is, of course, a fantastic result.

According to the short abstract, “The argument applies a new kind of arithmetic holonomy bound to a well-known construction of Zagier.”

h/t to Ido Kaminer and members of the “Ramanujan machine team” who told us about this result in a recent workshop.

Shana Tova to all the readers: happy new (Jewish) year!

Some more links: Apéry’s 1979 theorem that ζ(3) is irrational. A blog post on Persiflage about the new irrationality result. A videotaped lecture by Frank Calegari.  

By Gil Kalai

Lazy brute-force sampling: A universal perfect sampling scheme from Markov chains

from arXiv: Computational Complexity

Authors: Andreas Göbel, Marcus Pappik

We show that, under mild assumptions, every distribution on the hypercube $\{0, 1\}^{n}$ that admits a polynomial-time Markov chain approximate sampler also has an exact sampling algorithm with expected running time in poly$(n)$.

ACEV: Unsupervised Intersecting Manifold Segmentation using Adaptation to Angular Change of Eigenvectors in Intrinsic Dimension

from arXiv: Computational Geometry

Authors: Subhadip Boral, Rikathi Pal, Ashish Ghosh

Intersecting manifold segmentation has been a focus of research, where individual manifolds, that intersect with other manifolds, are separated to discover their distinct properties. The proposed method is based on the intuition that when a manifold in $D$ dimensional space with an intrinsic dimension of $d$ intersects with another manifold, the data variance grows in more than $d$ directions. The proposed method measures local data variances and determines their vector directions. It counts the number of vectors with non-zero variance, which determines the manifold's intrinsic dimension. For detection of the intersection region, the method adapts to the changes in the angular gaps between the corresponding direction vectors of the child and parent using exponential moving averages using a tree structure construction. Accordingly, it includes those data points in the same manifold whose neighborhood is within the adaptive angular difference and eventually identifies the data points in the intersection area of manifolds. Data points whose inclusion in the neighborhood-identified data points increases their intrinsic dimensionality are removed based on data variance and distance. The proposed method performs better than 18 SOTA manifold segmentation methods in ARI and NMI scores over 14 real-world datasets with lesser time complexity and better stability.

FPT Algorithms for Crossing Number Problems: A Unified Approach

from arXiv: Computational Geometry

Authors: Éric Colin de Verdière, Petr Hliněný

The basic crossing number problem is to determine the minimum number of crossings in a topological drawing of an input graph in the plane. In this paper, we develop fixed-parameter tractable (FPT) algorithms for various generalized crossing number problems in the plane or on surfaces. Our first result is on the color-constrained crossing problem, in which edges of the input graph $G$ are colored, and one looks for a drawing of $G$ in the plane or on a given surface in which the total number of crossings involving edges of colors $i$ and $j$ does not exceed a given upper bound $M_{ij}$. We give an algorithm for this problem that is FPT in the total number of crossings allowed and the genus of the surface. It readily implies an FPT algorithm for the joint crossing number problem. We also give new FPT algorithms for several other graph drawing problems, such as the skewness, the edge crossing number, the splitting number, the gap-planar crossing number, and their generalizations to surfaces. Our algorithms are reductions to the embeddability of a graph on a two-dimensional simplicial complex, which admits an FPT algorithm by a result of Colin de Verdi\`ere and Magnard [ESA 2021].

Better Boosting of Communication Oracles, or Not

from arXiv: Data Structures and Algorithms

Authors: Nathaniel Harms, Artur Riazanov

Suppose we have a two-party communication protocol for $f$ which allows the parties to make queries to an oracle computing $g$; for example, they may query an Equality oracle. To translate this protocol into a randomized protocol, we must replace the oracle with a randomized subroutine for solving $g$. If $q$ queries are made, the standard technique requires that we boost the error of each subroutine down to $O(1/q)$, leading to communication complexity which grows as $q \log q$. For which oracles $g$ can this naive boosting technique be improved? We focus on the oracles which can be computed by constant-cost randomized protocols, and show that the naive boosting strategy can be improved for the Equality oracle but not the 1-Hamming Distance oracle. Two surprising consequences are (1) a new example of a problem where the cost of computing $k$ independent copies grows superlinear in $k$, drastically simplifying the only previous example due to Blais & Brody (CCC 2019); and (2) a new proof that Equality is not complete for the class of constant-cost randomized communication (Harms, Wild, & Zamaraev, STOC 2022; Hambardzumyan, Hatami, & Hatami, Israel Journal of Mathematics 2022).

Representation of Classical Data on Quantum Computers

from arXiv: Data Structures and Algorithms

Authors: Thomas Lang, Anja Heim, Kilian Dremel, Dimitri Prjamkov, Martin Blaimer, Markus Firsching, Anastasia Papadaki, Stefan Kasperl, Theobald OJ Fuchs

Quantum computing is currently gaining significant attention, not only from the academic community but also from industry, due to its potential applications across several fields for addressing complex problems. For any practical problem which may be tackled using quantum computing, it is imperative to represent the data used onto a quantum computing system. Depending on the application, many different types of data and data structures occur, including regular numbers, higher-dimensional data structures, e.g., n-dimensional images, up to graphs. This report aims to provide an overview of existing methods for representing these data types on gate-based quantum computers.
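
Two of the most common encodings surveyed in reports of this kind are easy to show in Qiskit: basis encoding of a bit string and amplitude encoding of a normalized vector whose length is a power of two. This is a minimal sketch; the report covers many more representations.

```python
# Basis and amplitude encoding of classical data in Qiskit (minimal sketch).
import numpy as np
from qiskit import QuantumCircuit

# Basis encoding: the bit string 1011 becomes the computational basis state |1011>.
bits = "1011"
basis = QuantumCircuit(len(bits))
for q, b in enumerate(reversed(bits)):   # Qiskit orders qubits little-endian
    if b == "1":
        basis.x(q)

# Amplitude encoding: a length-2^n vector becomes the amplitudes of an n-qubit state.
data = np.array([3.0, 1.0, 2.0, 4.0])
state = data / np.linalg.norm(data)      # amplitudes must form a unit vector
amp = QuantumCircuit(2)
amp.initialize(state, [0, 1])

print(basis.draw())
print(amp.draw())
```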

Circuit and Graver Walks and Linear and Integer Programming

from arXiv: Data Structures and Algorithms

Authors: Shmuel Onn

We show that a circuit walk from a given feasible point of a given linear program to an optimal point can be computed in polynomial time using only linear algebra operations and the solution of the single given linear program. We also show that a Graver walk from a given feasible point of a given integer program to an optimal point is polynomial time computable using an integer programming oracle, but without such an oracle, it is hard to compute such a walk even if an optimal solution to the given program is given as well. Combining our oracle algorithm with recent results on sparse integer programming, we also show that Graver walks from any point are polynomial time computable over matrices of bounded tree-depth and subdeterminants.

$k$-local Graphs

from arXiv: Data Structures and Algorithms

Authors: Christian Beth, Pamela Fleischmann, Annika Huch, Daniyal Kazempour, Peer Kröger, Andrea Kulow, Matthias Renz

In 2017 Day et al. introduced the notion of locality as a structural complexity-measure for patterns in the field of pattern matching established by Angluin in 1980. In 2019 Casel et al. showed that determining the locality of an arbitrary pattern is NP-complete. Inspired by hierarchical clustering, we extend the notion to coloured graphs, i.e., given a coloured graph determine an enumeration of the colours such that colouring the graph stepwise according to the enumeration leads to as few clusters as possible. Next to first theoretical results on graph classes, we propose a priority search algorithm to compute the $k$-locality of a graph. The algorithm is optimal in the number of marking prefix expansions, and is faster by orders of magnitude than an exhaustive search. Finally, we perform a case study on a DBLP subgraph to demonstrate the potential of $k$-locality for knowledge discovery.

FPT Approximations for Fair $k$-Min-Sum-Radii

from arXiv: Data Structures and Algorithms

Authors: Lena Carta, Lukas Drexler, Annika Hennes, Clemens Rösner, Melanie Schmidt

We consider the $k$-min-sum-radii ($k$-MSR) clustering problem with fairness constraints. The $k$-min-sum-radii problem is a mixture of the classical $k$-center and $k$-median problems. We are given a set of points $P$ in a metric space and a number $k$ and aim to partition the points into $k$ clusters, each of the clusters having one designated center. The objective to minimize is the sum of the radii of the $k$ clusters (where in $k$-center we would only consider the maximum radius and in $k$-median we would consider the sum of the individual points' costs). Various notions of fair clustering have been introduced lately, and we follow the definitions due to Chierichetti, Kumar, Lattanzi and Vassilvitskii [NeurIPS 2017] which demand that cluster compositions shall follow the proportions of the input point set with respect to some given sensitive attribute. For the easier case where the sensitive attribute only has two possible values and each is equally frequent in the input, the aim is to compute a clustering where all clusters have a 1:1 ratio with respect to this attribute. We call this the 1:1 case. There has been a surge of FPT-approximation algorithms for the $k$-MSR problem lately, solving the problem both in the unconstrained case and in several constrained problem variants. We add to this research area by designing an FPT $(6+\epsilon)$-approximation that works for $k$-MSR under the mentioned general fairness notion. For the special 1:1 case, we improve our algorithm to achieve a $(3+\epsilon)$-approximation.

A Note on Approximation of Spanning Tree Congestion

from arXiv: Data Structures and Algorithms

Authors: Petr Kolman

The {\em Spanning Tree Congestion} problem is an easy-to-state NP-hard problem: given a graph $G$, construct a spanning tree $T$ of $G$ minimizing its maximum edge congestion where the congestion of an edge $e\in T$ is the number of edges $uv$ in $G$ such that the unique path between $u$ and $v$ in $T$ passes through $e$; the optimum value for a given graph $G$ is denoted $STC(G)$. It is known that {\em every} spanning tree is an $n/2$-approximation. A long-standing problem is to design a better approximation algorithm. Our contribution towards this goal is an $O(\Delta\cdot\log^{3/2}n)$-approximation algorithm for the minimum congestion spanning tree problem where $\Delta$ is the maximum degree in $G$. For graphs with maximum degree bounded by polylog of the number of vertices, this is an exponential improvement over the previous best approximation. For graphs with maximum degree bounded by $o(n/\log^{3/2}n)$, we get $o(n)$-approximation; this is the largest class of graphs that we know of, for which sublinear approximation is known for this problem. Our main tool for the algorithm is a new lower bound on the spanning tree congestion which is of independent interest. We prove that for every graph $G$, $STC(G)\geq \Omega(hb(G)/\Delta)$ where $hb(G)$ denotes the maximum bisection width over all subgraphs of $G$.
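
The congestion of a given spanning tree can be evaluated directly from the definition: route every graph edge along its unique tree path and count how many routed edges use each tree edge. A small sketch:

```python
# Computing the congestion of a spanning tree, straight from the definition.
import networkx as nx

def tree_congestion(G, T):
    congestion = {frozenset(e): 0 for e in T.edges()}
    for u, v in G.edges():
        path = nx.shortest_path(T, u, v)            # unique path in the tree
        for a, b in zip(path, path[1:]):
            congestion[frozenset((a, b))] += 1
    return max(congestion.values())

G = nx.complete_graph(6)
star = nx.star_graph(5)                             # spanning star centered at node 0
path = nx.path_graph(6)                             # spanning path 0-1-2-3-4-5
print("congestion of the star:", tree_congestion(G, star))   # n-1 = 5
print("congestion of the path:", tree_congestion(G, path))   # the middle edge carries 9 edges
```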

Closed Repeats

from arXiv: Data Structures and Algorithms

Authors: Dmitry Kosolobov

Much research in stringology focuses on structures that can, in a way, ``grasp'' repeats (substrings that occur multiple times) as, for example, the so-called runs, a.k.a. maximal repetitions, compactly describe all tandem repeats. In this paper we introduce closed repeats: given a string $s$, its non-empty substring $s[i\,..\,j]$ is a right (left) closed repeat if its closest occurrence $s[i'\,..\,j']$ with $i' > i$ cannot be ``extended'' to the right (respectively, left) matching $s[j{+}1] = s[j'{+}1]$ (respectively, $s[i{-}1] = s[i'{-}1]$); the repeat is closed if it is both left and right closed. We note that the closed repeats correspond to the maximal closed substrings recently proposed by Badkobeh et al. and they include all runs as a special case. We prove that the number of right/left closed repeats is $O(n \log n)$, where $n$ is the length of $s$, and we show that this bound is tight. The (right/left) closed repeats can be computed in the optimal time $O(n\log n)$; as we prove, the computation time cannot be lower than $\Omega(n\log\sigma)$ over a general ordered alphabet of size $\sigma$ even when the number of the closed repeats is $O(n)$. As an application, we describe data structures using the closed repeats for a number of substring queries: finding the period of the substring provided it is ``periodic'', finding the longest repeat in the substring, computing the rightmost LZ77 parsing of the substring.
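
The definition can be checked naively in polynomial time, far from the paper's $O(n\log n)$ algorithm. The sketch below enumerates substrings, finds the closest later occurrence, and tests the left/right extension conditions; it is our literal reading of the definition, for illustration only.

```python
# Naive enumeration of closed repeats, following the definition in the abstract.
def closed_repeats(s):
    n = len(s)
    result = []
    for i in range(n):
        for j in range(i, n):
            sub = s[i:j + 1]
            ip = s.find(sub, i + 1)              # closest occurrence starting after i
            if ip == -1:
                continue                          # the substring is not a repeat
            jp = ip + (j - i)
            right_closed = j + 1 >= n or jp + 1 >= n or s[j + 1] != s[jp + 1]
            left_closed = i == 0 or ip == 0 or s[i - 1] != s[ip - 1]
            if right_closed and left_closed:
                result.append((i, j))
    return result

s = "abaababaab"
for i, j in closed_repeats(s):
    print(s[i:j + 1], (i, j))
```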

Tuesday, October 01

Sad times for AI safety

from Scott Aaronson

Many of you will have seen the news that Governor Gavin Newsom has vetoed SB 1047, the groundbreaking AI safety bill that overwhelmingly passed the California legislature. Newsom gave a disingenuous explanation (which no one on either side of the debate took seriously), that he vetoed the bill only because it didn’t go far enough […]

Many of you will have seen the news that Governor Gavin Newsom has vetoed SB 1047, the groundbreaking AI safety bill that overwhelmingly passed the California legislature. Newsom gave a disingenuous explanation (which no one on either side of the debate took seriously), that he vetoed the bill only because it didn’t go far enough (!!) in regulating the misuses of small models. While sad, this doesn’t come as a huge shock, as Newsom had given clear prior indications that he was likely to veto the bill, and many observers had warned to expect him to do whatever he thought would most further his political ambitions and/or satisfy his strongest lobbyists. In any case, I’m reluctantly forced to the conclusion that either Governor Newsom doesn’t read Shtetl-Optimized, or else he somehow wasn’t persuaded by my post last month in support of SB 1047.

Many of you will also have seen the news that OpenAI will change its structure to be a fully for-profit company, abandoning any pretense of being controlled by a nonprofit, and that (possibly relatedly) almost no one now remains from OpenAI’s founding team other than Sam Altman himself. It now looks to many people like the previous board has been 100% vindicated in its fear that Sam did, indeed, plan to move OpenAI far away from the nonprofit mission with which it was founded. It’s a shame the board didn’t manage to explain its concerns clearly at the time, to OpenAI’s employees or to the wider world. Of course, whether you see the new developments as good or bad is up to you. Me, I kinda liked the previous mission, as well as the expressed beliefs of the previous Sam Altman!

Anyway, certainly you would’ve known all this if you read Zvi Mowshowitz. Broadly speaking, there’s nothing I can possibly say about AI safety policy that Zvi hasn’t already said in 100x more detail, anticipating and responding to every conceivable counterargument. I have no clue how he does it, but if you have any interest in these matters and you aren’t already reading Zvi, start.

Regardless of any setbacks, the work of AI safety continues. I am not and have never been a Yudkowskyan … but still, given the empirical shock of the past four years, I’m now firmly, 100% in the camp that we need to approach AI with humility for the magnitude of civilizational transition that’s about to occur, and for our massive error bars about what exactly that transition will entail. We can’t just “leave it to the free market” any more than we could’ve left the development of thermonuclear weapons to the free market.

And yes, whether in academia or working with AI companies, I’ll continue to think about what theoretical computer science can do for technical AI safety. Speaking of which, I’d love to hire a postdoc to work on AI alignment and safety, and I already have interested candidates. Would any person of means who reads this blog like to fund such a postdoc for me? If so, shoot me an email!

By Scott

Ends in a draw

from Ben Recht

A geometric argument for strong duality

This is the live blog of Lecture 10 of my graduate class “Convex Optimization.” A Table of Contents is here.

Last week, I showed how weak duality is an almost tautological condition. But convex problems also (for the most part) have strong duality, where the optimal value of the dual problem is equal to the optimal value of the primal problem. There are many different intuitions for why strong duality holds for convex problems, but the argument I like the most (and the one in Boyd and Vandenberghe) leans on separating hyperplanes.

Today’s blog has more notation than I’d like. Duality, it seems, is hard to motivate without getting technical! But my goal is to set up two pictures that illustrate why we should expect strong duality for convex problems and not nonconvex ones. Hopefully, those illustrations provide some intuition.

Let’s focus on a simple optimization problem today
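Presumably this is the standard form used in Boyd and Vandenberghe, with an objective $f_0$ and a single inequality constraint $f_1$ (the same $f_1$ that shows up in Slater's condition below):

minimize $f_0(x)$ subject to $f_1(x) \le 0$.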

Let p* denote the optimal value of this problem. To prove strong duality, we want to show that there is some value of the Lagrange multiplier, 𝜆, such that the Lagrangian provides an upper bound on the optimal value:
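presumably, some 𝜆 ≥ 0 for which

$f_0(x) + \lambda f_1(x) \ge p^*$ for all x.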

Let’s think about what this condition says geometrically. Consider the graph of the optimization problem 
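presumably the set of achievable pairs $(u,t) = (f_1(x), f_0(x))$ as $x$ ranges over the domain.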

Then strong duality holds if we can find a 𝜆 where
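presumably

$\lambda u + t \ge p^*$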

for all (u,t) in the graph with u nonpositive.

This means we are trying to find a half space that contains the graph of the optimization problem. Since we are now reasoning about half spaces, it helps to turn the graph into a convex set. Let me define an “epigraph” of the optimization problem:
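Plausibly, following Boyd and Vandenberghe's construction, this is

$A = \{\, (u,t) : f_1(x) \le u \text{ and } f_0(x) \le t \text{ for some } x \,\}.$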

The desired strong duality condition amounts to a hyperplane that supports A at (0,p*) and slopes downward, as in this picture:

Stare at the picture until you convince yourself it is the right condition. I always have to wrap my head around the mapping from the declarative form of the optimization into this geometric object. From this geometric view, the optimization problem is equivalent to minimizing the value of t in A with u=0. That is
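plausibly

$p^* = \inf\{\, t : (0,t) \in A \,\}.$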

The dual problem is maximizing the t-intercept over hyperplanes whose upper half space contains A. Indeed, for any value of 𝜆, the dual function value is the t-intercept of the associated hyperplane.

A picture also shows us what might prevent strong duality in the case of nonconvex functions.

Here, the t-intercept can’t reach the optimal primal value as it’s blocked by the nonconvex geometry.

These pictures suggest a path to verifying strong duality by showing there is a supporting hyperplane at the point (0,p*). To do this, define another set
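presumably

$B = \{\, (0,t) : t < p^* \,\}.$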

A and B don’t intersect, so there is always a separating hyperplane 
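given, presumably, by a nonzero pair (β, 𝜶) and a scalar γ with

$\beta u + \alpha t \ge \gamma$ for all $(u,t) \in A$, and $\beta u + \alpha t \le \gamma$ for all $(u,t) \in B$.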

By the definition of A, 𝜶 and β both have to be nonnegative or else the affine function they define would be unbounded below on A. If 𝜶 is positive, then we can set 𝜆 = β/𝜶,  and we will have proven strong duality.

To guarantee that 𝜶 is positive, you have to assume some extra facts about the optimization problem. These assumptions are called constraint qualifications. The only two most people need to know are (a) linear programming has strong duality and (b) if there is a point with f_1(x) strictly less than zero, you have strong duality. The second condition is called Slater’s condition.

Geometrically, strong duality was almost inevitable for convex functions. The Lagrangian was a hyperplane in disguise. We turned a problem about a convex set into a problem about the half spaces that contain that set. The duality between convex sets and the half spaces containing them is the fundamental property of convexity that enables all convex optimization.

By Ben Recht

postdoc at Institute of Fundamental Technological Research, Polish Academy of Sciences (apply by October 31, 2024)

from CCI: jobs

2 postdoc positions will be opened at the Institute of Fundamental Technological Research, Polish Academy of Sciences, in Warsaw, Poland. We will be seeking candidates with background in at least one: – theoretical computer science – logic – computational learning theory – computational social choice – artificial intelligence – machine learning Predoctoral candidates can be […]

Two postdoc positions will be opened at the Institute of Fundamental Technological Research, Polish Academy of Sciences, in Warsaw, Poland. We will be seeking candidates with a background in at least one of the following:
– theoretical computer science
– logic
– computational learning theory
– computational social choice
– artificial intelligence
– machine learning

Predoctoral candidates can also be considered.

Website: https://www.ippt.pan.pl/en/
Email: tsteifer@ippt.pan.pl

By shacharlovett

Quantum Fast Implementation of Private Information Retrieval and Functional Bootstrapping

from arXiv: Computational Complexity

Authors: Guangsheng Ma, Hongbo Li

Quantum computation has found greater efficiency and security across various fields. We show that, in a near-term hybrid cloud computing scenario with only one single quantum server and an entirely classical client, critical bottlenecks in privacy-preserving computation can be addressed. First, we propose an efficient quantum functional bootstrapping algorithm with a runtime polynomial in the plaintext-size, providing an exponential quantum speedup over classical algorithms. Second, we present a secure and fast quantum private information retrieval protocol with logarithmic query time. The security relies on the learning with errors (LWE) problem with polynomial modulus, greatly improving the security of classical fast PIR protocol based on ring-LWE with super-polynomial modulus. Technically, we extend an important classical homomorphic operation, known as blind rotation, to the quantum case by an encrypted conditional rotation technique. This technique holds promise for broader applications in quantum cryptography.

A Quantum Unique Games Conjecture

from arXiv: Computational Complexity

Authors: Hamoon Mousavi, Taro Spirig

After the NP-hardness of computational problems such as 3SAT and MaxCut was established, a natural next step was to explore whether these problems remain hard to approximate. While the quantum extensions of some of these problems are known to be hard (indeed, undecidable), their inapproximability remains largely unresolved. In this work, we introduce definitions for the quantum extensions of Label-Cover and Unique-Label-Cover. We show that these problems play a similarly crucial role in studying the inapproximability of quantum constraint satisfaction problems as they do in the classical setting.

Algorithms and complexity for monitoring edge-geodetic sets in graphs

from arXiv: Computational Complexity

Authors: Florent Foucaud, Clara Marcille, R. B. Sandeep, Sagnik Sen, S Taruni

A monitoring edge-geodetic set of a graph is a subset $M$ of its vertices such that for every edge $e$ in the graph, deleting $e$ increases the distance between at least one pair of vertices in $M$. We study the following computational problem \textsc{MEG-set}: given a graph $G$ and an integer $k$, decide whether $G$ has a monitoring edge geodetic set of size at most $k$. We prove that the problem is NP-hard even for 2-apex 3-degenerate graphs, improving a result by Haslegrave (Discrete Applied Mathematics 2023). Additionally, we prove that the problem cannot be solved in subexponential-time, assuming the Exponential-Time Hypothesis, even for 3-degenerate graphs. Further, we prove that the optimization version of the problem is APX-hard, even for 4-degenerate graphs. Complementing these hardness results, we prove that the problem admits a polynomial-time algorithm for interval graphs, a fixed-parameter tractable algorithm for general graphs with clique-width plus diameter as the parameter, and a fixed-parameter tractable algorithm for chordal graphs with treewidth as the parameter. We also provide an approximation algorithm with factor $\ln m\cdot OPT$ and $\sqrt{n\ln m}$ for the optimization version of the problem, where $m$ is the number of edges, $n$ the number of vertices, and $OPT$ is the size of a minimum monitoring edge-geodetic set of the input graph.
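
To make the definition concrete, here is a brute-force check (an illustrative sketch using networkx, not one of the paper's algorithms): M is a monitoring edge-geodetic set exactly when deleting any single edge strictly increases the distance, possibly to infinity, between some pair of vertices of M.

from itertools import combinations
import networkx as nx

def is_meg_set(G, M):
    """True if every edge deletion increases the distance between some pair in M."""
    base = {(u, v): nx.shortest_path_length(G, u, v) for u, v in combinations(M, 2)}
    for e in list(G.edges()):
        G.remove_edge(*e)
        monitored = False
        for (u, v), d in base.items():
            try:
                monitored = nx.shortest_path_length(G, u, v) > d
            except nx.NetworkXNoPath:
                monitored = True      # the pair got disconnected
            if monitored:
                break
        G.add_edge(*e)
        if not monitored:
            return False              # no pair in M notices the loss of e
    return True

# Example: in the 4-cycle, all four vertices monitor every edge; two adjacent ones do not.
C4 = nx.cycle_graph(4)
print(is_meg_set(C4, [0, 1, 2, 3]))  # True
print(is_meg_set(C4, [0, 1]))        # False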

Dual Encoder GAN Inversion for High-Fidelity 3D Head Reconstruction from Single Images

from arXiv: Computational Geometry

Authors: Bahri Batuhan Bilecen, Ahmet Berke Gokmen, Aysegul Dundar

3D GAN inversion aims to project a single image into the latent space of a 3D Generative Adversarial Network (GAN), thereby achieving 3D geometry reconstruction. While there exist encoders that achieve good results in 3D GAN inversion, they are predominantly built on EG3D, which specializes in synthesizing near-frontal views and is limiting in synthesizing comprehensive 3D scenes from diverse viewpoints. In contrast to existing approaches, we propose a novel framework built on PanoHead, which excels in synthesizing images from a 360-degree perspective. To achieve realistic 3D modeling of the input image, we introduce a dual encoder system tailored for high-fidelity reconstruction and realistic generation from different viewpoints. Accompanying this, we propose a stitching framework on the triplane domain to get the best predictions from both. To achieve seamless stitching, both encoders must output consistent results despite being specialized for different tasks. For this reason, we carefully train these encoders using specialized losses, including an adversarial loss based on our novel occlusion-aware triplane discriminator. Experiments reveal that our approach surpasses the existing encoder training methods qualitatively and quantitatively. Please visit the project page: https://berkegokmen1.github.io/dual-enc-3d-gan-inv.