Theory of Computing Report

Tuesday, November 18

Parameterized complexity of scheduling unit-time jobs with generalized precedence constraints

from arXiv: Computational Complexity

Authors: Christina Büsing, Maurice Draeger, Corinna Mathwieser

We study the parameterized complexity of scheduling unit-time jobs on parallel, identical machines under generalized precedence constraints for minimization of the makespan and the sum of completion times. In our setting, each job is equipped with a Boolean formula (precedence constraint) over the set of jobs. A schedule satisfies a job's precedence constraint if setting earlier jobs to true satisfies the formula. Our definition generalizes several common types of precedence constraints: classical and-constraints if every formula is a conjunction, or-constraints if every formula is a disjunction, and and/or-constraints if every formula is in conjunctive normal form. We prove fixed-parameter tractability when parameterizing by the number of predecessors. For parameterization by the number of successors, however, the complexity depends on the structure of the precedence constraints. If every constraint is a conjunction or a disjunction, we prove the problem to be fixed-parameter tractable. For constraints in disjunctive normal form, we prove W[1]-hardness. We show that the and/or-constrained problem is NP-hard, even for a single successor. Moreover, we prove NP-hardness on two machines if every constraint is a conjunction or a disjunction. This result not only proves para-NP-hardness for parameterization by the number of machines but also complements the polynomial-time solvability on two machines if every constraint is a conjunction (Coffman and Graham 1972) or if every constraint is a disjunction (Berit 2005).
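
As a purely illustrative aid (not the authors' construction), the sketch below shows one way to represent the generalized precedence constraints defined above: each job carries a Boolean formula over the other jobs, and a schedule satisfies it if the formula evaluates to true when exactly the jobs completed strictly earlier are set to true. The nested-tuple encoding of formulas is a hypothetical choice for the example.

```python
# Illustrative sketch (not from the paper): generalized precedence constraints
# as Boolean formulas over jobs. A job's constraint is satisfied by a schedule
# if setting the jobs scheduled strictly earlier to True satisfies the formula.

def evaluate(formula, earlier):
    """Evaluate a formula given the set of jobs scheduled earlier.

    Formulas are nested tuples:
      ("var", j)          -- true iff job j is scheduled earlier
      ("and", f1, f2, ..) -- conjunction
      ("or",  f1, f2, ..) -- disjunction
    """
    kind = formula[0]
    if kind == "var":
        return formula[1] in earlier
    if kind == "and":
        return all(evaluate(f, earlier) for f in formula[1:])
    if kind == "or":
        return any(evaluate(f, earlier) for f in formula[1:])
    raise ValueError(f"unknown connective: {kind}")


def schedule_is_feasible(start_times, constraints):
    """Check every job's precedence constraint for a unit-time schedule.

    start_times: dict job -> integer start time
    constraints: dict job -> formula (or None for unconstrained jobs)
    """
    for job, formula in constraints.items():
        if formula is None:
            continue
        earlier = {j for j, t in start_times.items() if t < start_times[job]}
        if not evaluate(formula, earlier):
            return False
    return True


if __name__ == "__main__":
    # Job c needs (a AND b); job d needs (a OR c) -- an and/or mix.
    constraints = {
        "a": None,
        "b": None,
        "c": ("and", ("var", "a"), ("var", "b")),
        "d": ("or", ("var", "a"), ("var", "c")),
    }
    # Two machines, unit-time jobs: a, b at time 0; c, d at time 1.
    print(schedule_is_feasible({"a": 0, "b": 0, "c": 1, "d": 1}, constraints))  # True
```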

Too Many or Too Few? Sampling Bounds for Topological Descriptors

from arXiv: Computational Geometry

Authors: Brittany Terese Fasy, Maksym Makarchuk, Samuel Micka, David L. Millman

Topological descriptors, such as the Euler characteristic function and the persistence diagram, have grown increasingly popular for representing complex data. Recent work showed that a carefully chosen set of these descriptors encodes all of the geometric and topological information about a shape in R^d. In practice, epsilon nets are often used to find samples in one of two extremes. On one hand, making strong geometric assumptions about the shape allows us to choose epsilon small enough (corresponding to a high enough density sample) in order to guarantee a faithful representation, resulting in oversampling. On the other hand, if we choose a larger epsilon in order to allow faster computations, this leads to an incomplete description of the shape and a discretized transform that lacks theoretical guarantees. In this work, we investigate how many directions are really needed to represent geometric simplicial complexes, exploring both synthetic and real-world datasets. We provide constructive proofs that help establish size bounds and an experimental investigation giving insights into the consequences of over- and undersampling.
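
As a purely illustrative aid, the sketch below computes one common direction-dependent descriptor, the sublevel-set Euler characteristic curve of a geometric simplicial complex in a chosen direction; it only illustrates the kind of descriptor the abstract refers to, and the small triangle complex is a made-up example, not data from the paper.

```python
# Illustrative sketch: the Euler characteristic curve of a geometric simplicial
# complex in a fixed direction, using the usual sublevel-set convention.
import numpy as np

def euler_characteristic_curve(vertices, simplices, direction, thresholds):
    """chi(t) of the subcomplex of simplices whose vertices all have height <= t.

    vertices:   (n, d) array of vertex coordinates
    simplices:  list of tuples of vertex indices (the vertices themselves included)
    direction:  length-d direction vector
    thresholds: iterable of filtration values t
    """
    heights = vertices @ np.asarray(direction, dtype=float)
    # A simplex enters the sublevel-set filtration at the max height of its vertices.
    entry = [max(heights[v] for v in s) for s in simplices]
    dims = [len(s) - 1 for s in simplices]
    return [sum((-1) ** d for d, e in zip(dims, entry) if e <= t) for t in thresholds]

if __name__ == "__main__":
    # A filled triangle in the plane: 3 vertices, 3 edges, 1 two-dimensional face.
    V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    S = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
    # Sweeping in the x-direction: chi is 0 before the complex appears, then 1.
    print(euler_characteristic_curve(V, S, direction=[1.0, 0.0], thresholds=[-1, 0, 0.5, 1]))
    # -> [0, 1, 1, 1]
```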

Graded Projection Recursion (GPR): A Framework for Controlling Bit-Complexity of Algebraic Packing

from arXiv: Data Structures and Algorithms

Authors: Jeffrey Uhlmann

Recursive algorithms that use algebraic packing may appear to achieve reduced computational complexity that does not reflect the dominating bit-complexity, i.e., the implemented performance would not exhibit the claimed scaling. This paper introduces Graded Projection Recursion (GPR), a formal framework designed to address this problem by defining a rigorous mechanism that provably controls bit growth if specific conditions are satisfied. We use GPR to develop a matrix multiplication algorithm that is model-honest and achieves a true near-quadratic bit-complexity. This framework also provides a general GPR Substitution Principle recipe for accelerating a significant class of related algorithms in a provably rigorous way.

An improved approximation algorithm for k-Median

from arXiv: Data Structures and Algorithms

Authors: Neal E. Young

We give a polynomial-time approximation algorithm for the (not necessarily metric) $k$-Median problem. The algorithm is an $\alpha$-size-approximation algorithm for $\alpha < 1 + 2 \ln(n/k)$. That is, it guarantees a solution having size at most $\alpha \times k$, and cost at most the cost of any size-$k$ solution. This is the first polynomial-time approximation algorithm to match the well-known bounds of $H_\Delta$ and $1 + \ln(n/k)$ for unweighted Set Cover (a special case) within a constant factor. It matches these bounds within a factor of 2. The algorithm runs in time $O(k m \log(n/k) \log m)$, where $n$ is the number of customers and $m$ is the instance size.
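
As a quick numerical illustration of the size bound claimed above, the snippet below just evaluates $\alpha < 1 + 2 \ln(n/k)$ for a few hypothetical values of $n$ and $k$; the numbers are made up.

```python
# Illustrative only: evaluate the size bound alpha < 1 + 2*ln(n/k) from the abstract
# for a few hypothetical (n, k) pairs.
import math

for n, k in [(10_000, 10), (1_000_000, 100), (1_000_000, 10_000)]:
    alpha = 1 + 2 * math.log(n / k)
    print(f"n={n:>9}, k={k:>6}: solution size at most alpha*k ~ {alpha:.2f} * k = {alpha * k:,.0f}")
```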

Optimal and Efficient Partite Decompositions of Hypergraphs

from arXiv: Data Structures and Algorithms

Authors: Andrew Krapivin, Benjamin Przybocki, Nicolás Sanhueza-Matamala, Bernardo Subercaseaux

We study the problem of partitioning the edges of a $d$-uniform hypergraph $H$ into a family $F$ of complete $d$-partite hypergraphs ($d$-cliques). We show that there is a partition $F$ in which every vertex $v \in V(H)$ belongs to at most $(\frac{1}{d!} + o_d(1))n^{d-1}/\lg n$ members of $F$. This settles the central question of a line of research initiated by Erdős and Pyber (1997) for graphs, and more recently by Csirmaz, Ligeti, and Tardos (2014) for hypergraphs. The $d=2$ case of this theorem answers a 40-year-old question of Chung, Erdős, and Spencer (1983). An immediate corollary of our result is an improved upper bound for the maximum share size for binary secret sharing schemes on uniform hypergraphs. Building on results of Nechiporuk (1969), we prove that every graph with fixed edge density $\gamma \in (0,1)$ has a biclique partition of total weight at most $(\tfrac{1}{2}+o(1))\cdot h_2(\gamma) \frac{n^2}{\lg n}$, where $h_2$ is the binary entropy function. Our construction implies that such biclique partitions can be constructed in time $O(m)$, which answers a question of Feder and Motwani (1995) and also improves upon results of Mubayi and Turán (2010) as well as Chavan, Rabinia, Grosu, and Brocanelli (2025). Using similar techniques, we also give an $n^{1+o(1)}$ algorithm for finding a subgraph $K_{t,t}$ with $t = (1-o(1)) \frac{\gamma}{h_2(\gamma)} \lg n$. Our results show that biclique partitions are information-theoretically optimal representations for graphs at every fixed density. We show that with this succinct representation one can answer independent set queries and cut queries in time $O(n^2/ \lg n)$, and if we increase the space usage by a constant factor, we can compute a $2\alpha$-approximation for the densest subgraph problem in time $O(n^2/\lg \alpha)$ for any $\alpha > 1$.

Monday, November 17

Guest post: FOCS 2025, and Job Openings spreadsheet

from TCS+ Seminar Series

A guest post by the General Chair of FOCS 2025, Clément Canonne.

FOCS 2025 is less than a month away, with an early bird deadline coming up on November 21, and the summer sun already shining in Sydney. Below are a few important announcements, not necessarily only relevant to attendees (though we do hope you will attend!):

Graduating Bits (afternoon, Sunday – Day 1): Following the (now) long-standing tradition initiated at ITCS, graduating PhD students and postdoctoral researchers will give short 2-minute presentations introducing themselves, their research interests and plans. Many of these junior researchers will be looking for positions in the upcoming year. Please see 🔗 this page to register as a presenter, and for their slides and information after the event.

Job Openings Spreadsheet (asynchronous): The 🔗 Job Openings page has a spreadsheet with information about job openings in Theoretical CS, to help graduating students and junior researchers. If your group plans to hire a postdoc, or a researcher (e.g., in industry), you can use the following form to provide information about the position: https://forms.gle/JSCaHfV5pVGCLXhM7.

Job Applicants Form: To complement the above two initiatives, if you are on the job market, you can provide your information via this form: https://forms.gle/ipAQg4cmBsmj3gwu6. This will not be made public; it will only be shared with the FOCS 2025 sponsors and relevant parties/attendees so they can contact you during FOCS.

Visa: if you need a visa letter (especially as a presenter), please reach out to the general chair ASAP: clement.canonne@sydney.edu.au. You can check how to obtain a visa, and which visa to apply for (including the ETA for American citizens), on this government website.

Childcare support: To reduce barriers to attendance, we will try to facilitate childcare for the duration of the conference. We have some limited funding, thanks to our Australian sponsors, to provide partial childcare support to attendees, with as little red tape or administrative overhead on your end as possible; see this page for details.

Satellite events: there will be two free “unaffiliated” satellite events, both held at the University of Sydney, before and after the conference: “A Celebration of TCS,” December 11–13, and “Trends in Approximation and Online Algorithms,” December 18–19. See this page for more details.

By plustcs

postdoc at ETH Zurich (apply by December 7, 2025)

from CCI: jobs

A full-time postdoc position is available in Karl Bringmann’s research group on Fine-Grained Algorithms & Complexity, which will be established at ETH Zurich in January 2026.

This position may optionally be joint with Danupon Nanongkai’s group at Max Planck Institute for Informatics, spending one year at ETH and one year at MPI.

Website: https://people.mpi-inf.mpg.de/~kbringma/jobopenings.html
Email: karl.bringmann@inf.ethz.ch

By shacharlovett

Postdoc at Max Planck Institute for Informatics (apply by December 15, 2025)

from CCI: jobs

Max Planck Institute for Informatics seeks several Postdoc and Group Leaders for the Algorithms and Complexity Department directed by Danupon Nanongkai.

We are looking for applicants from all areas of algorithms and complexity. Postdocs are free in their choice of research topics and collaborators. Group leaders are expected to independently build and lead research groups.

Website: https://www.mpi-inf.mpg.de/d1/offers/postdoc
Email: join-d1@mpi-inf.mpg.de

By shacharlovett

Arity hierarchies for quantifiers closed under partial polymorphisms

from arXiv: Computational Complexity

Authors: Anuj Dawar, Lauri Hella, Benedikt Pago

We investigate the expressive power of generalized quantifiers closed under partial polymorphism conditions motivated by the study of constraint satisfaction problems. We answer a number of questions arising from the work of Dawar and Hella (CSL 2024) where such quantifiers were introduced. For quantifiers closed under partial near-unanimity polymorphisms, we establish hierarchy results clarifying the interplay between the arity of the polymorphisms and of the quantifiers: The expressive power of $(\ell+1)$-ary quantifiers closed under $\ell$-ary partial near-unanimity polymorphisms is strictly between the class of all quantifiers of arity $\ell-1$ and $\ell$. We also establish an infinite hierarchy based on the arity of quantifiers with a fixed arity of partial near-unanimity polymorphisms. Finally, we prove inexpressiveness results for quantifiers with a partial Maltsev polymorphism. The separation results are proved using novel algebraic constructions in the style of Cai-Fürer-Immerman and the quantifier pebble games of Dawar and Hella (2024).

Fast polynomial computations with space constraints

from arXiv: Computational Complexity

Authors: Bruno Grenet

The works presented in this habilitation concern the algorithmics of polynomials. This is a central topic in computer algebra, with numerous applications both within and outside the field - cryptography, error-correcting codes, etc. For many problems, extremely efficient algorithms have been developed since the 1960s. Here, we are interested in how this efficiency is affected when space constraints are introduced. The first part focuses on the time-space complexity of fundamental polynomial computations - multiplication, division, interpolation, ... While naive algorithms typically have constant space complexity, fast algorithms generally require linear space. We develop algorithms that are both time- and space-efficient. This leads us to discuss and refine definitions of space complexity for function computation. In the second part, the space constraints are put on the inputs and outputs. Algorithms for polynomials generally assume a dense representation, that is, storing the full list of coefficients. In contrast, we work with sparse polynomials, in which most coefficients vanish. In particular, we describe the first quasi-linear algorithm for sparse interpolation, which plays a role analogous to the Fast Fourier Transform in the sparse setting. We also explore computationally hard problems concerning divisibility and factorization of sparse polynomials.
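
As a purely illustrative aside, the sketch below contrasts the two representations mentioned in the abstract, the dense coefficient list versus a sparse map of nonzero terms, with a naive sparse product; the dictionary encoding is a common textbook choice, not the author's data structure.

```python
# Illustrative sketch of the two polynomial representations contrasted above:
# dense = full coefficient list (size ~ degree), sparse = only the nonzero terms.

def dense_to_sparse(coeffs):
    """coeffs[i] is the coefficient of x^i; return {exponent: coefficient}."""
    return {e: c for e, c in enumerate(coeffs) if c != 0}

def sparse_mul(p, q):
    """Naive product of two sparse polynomials given as {exponent: coefficient}."""
    result = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = e1 + e2
            result[e] = result.get(e, 0) + c1 * c2
    return {e: c for e, c in result.items() if c != 0}

if __name__ == "__main__":
    # x^1000 + 3 stored densely needs 1001 coefficients, sparsely just 2 terms.
    p = dense_to_sparse([3] + [0] * 999 + [1])   # {0: 3, 1000: 1}
    q = {500: 2, 0: -1}                          # 2x^500 - 1
    # (x^1000 + 3)(2x^500 - 1) = 2x^1500 - x^1000 + 6x^500 - 3
    print(sparse_mul(p, q))
```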

On the Dynamics of Bounded-Degree Automata Networks

from arXiv: Computational Complexity

Authors: Julio Aracena, Florian Bridoux, Maximilien Gadouleau, Pierre Guillon, Kévin Perrot, Adrien Richard, Guillaume Theyssier

Automata networks can be seen as bare finite dynamical systems, but their growing theory has shown the importance of the underlying communication graph of such networks. This paper tackles the question of what dynamics can be realized up to isomorphism if we suppose that the communication graph has bounded degree. We prove several negative results about parameters like the number of fixed points or the rank. We also show that we can realize with degree 2 a dynamics made of a single fixed point and a cycle gathering all other configurations. However, we leave open the embarrassingly simple question of whether a dynamics consisting of a single cycle can be realized with bounded degree, although we prove that it is impossible when the network becomes acyclic after suppressing one node, and that realizing precisely a Gray code map is impossible with bounded degree. Finally, we give bounds on the complexity of the problem of recognizing such dynamics.
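
To fix intuition about what the dynamics of an automata network looks like, the sketch below synchronously iterates a tiny Boolean network with in-degree at most 2 from every configuration and reports the lengths of the limit cycles it falls into; the wiring and local rules are made up for illustration and are unrelated to the paper's constructions.

```python
# Illustrative sketch: the global dynamics of a small Boolean automata network
# with bounded in-degree. The local rules and wiring below are made-up examples.
from itertools import product

def step(config, rules):
    """One synchronous update: every node applies its local rule to the current config."""
    return tuple(rule(config) for rule in rules)

def limit_cycle_lengths(rules, n):
    """Iterate from every configuration and collect the lengths of all limit cycles."""
    lengths = set()
    for config in product((0, 1), repeat=n):
        seen = {}
        t = 0
        while config not in seen:
            seen[config] = t
            config = step(config, rules)
            t += 1
        lengths.add(t - seen[config])   # period of the cycle eventually reached
    return lengths

if __name__ == "__main__":
    # Three nodes, each reading at most two others (bounded in-degree 2).
    rules = [
        lambda c: c[1] & c[2],   # node 0
        lambda c: c[0] | c[2],   # node 1
        lambda c: 1 - c[0],      # node 2
    ]
    print(limit_cycle_lengths(rules, 3))
    # -> {5}: all 8 configurations eventually fall into one limit cycle of length 5
```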

Structure-Aware Encodings of Argumentation Properties for Clique-width

from arXiv: Computational Complexity

Authors: Yasir Mahmood, Markus Hecher, Johanna Groven, Johannes K. Fichte

Structural measures of graphs, such as treewidth, are central tools in computational complexity, yielding efficient algorithms when the parameter is exploited. It is even known that modern SAT solvers work efficiently on instances of small treewidth. Since these solvers are widely applied, there is growing research interest in compact encodings into (Q)SAT, both for solving and for understanding encoding limitations. Even more general is the graph parameter clique-width, which, unlike treewidth, can be small for dense graphs. Although algorithms are available for clique-width, little is known about encodings. We initiate the quest to understand encoding capabilities with clique-width by considering abstract argumentation, which is a robust framework for reasoning with conflicting arguments. It is based on directed graphs and asks for computationally challenging properties, making it a natural candidate for this study. We design novel reductions from argumentation problems to (Q)SAT. Our reductions linearly preserve the clique-width, resulting in directed decomposition-guided (DDG) reductions. We establish novel results for all argumentation semantics, including counting. Notably, the overhead caused by our DDG reductions cannot be significantly improved under reasonable assumptions.

Parameterized complexity of the f-Critical Set problem

from arXiv: Computational Complexity

Authors: Thiago Marcilon, Murillo Inácio da Costa Silva

Given a graph $G=(V,E)$ and a function $f:V(G) \rightarrow \mathbb{N}$, an $f$-reversible process on $G$ is a dynamical system such that, given an initial vertex labeling $c_0 : V(G) \rightarrow \{0,1\}$, every vertex $v$ changes its label if and only if it has at least $f(v)$ neighbors with the opposite label. The updates occur synchronously in discrete time steps $t=0,1,2,\ldots$. An $f$-critical set of $G$ is a subset of vertices of $G$ whose initial label is $1$ such that, in an $f$-reversible process on $G$, all vertices reach label $1$ within one time step and then remain unchanged. The critical set number $r^c_f(G)$ is the minimum size of an $f$-critical set of $G$. Given a graph $G$, a threshold function $f$, and an integer $k$, the $f$-Critical Set problem asks whether $r^c_f(G) \leq k$. We prove that this problem is NP-complete for planar subcubic bipartite graphs and $m(f) \leq 2$, where $m(f)$ is the largest value of $f(v)$ over all $v \in V(G)$, and is W[1]-hard when parameterized by the treewidth $tw(G)$ of $G$. Additionally, we show that the problem is in FPT when parameterized by $tw(G)+m(f)$, $tw(G)+\Delta(G)$, and $k$, where $\Delta(G)$ denotes the maximum degree of a vertex in $G$. Finally, we present two kernels of sizes $O(k \cdot m(f))$ and $O(k \cdot \Delta(G))$.
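
The sketch below simulates one synchronous step of an $f$-reversible process exactly as defined above and checks whether a candidate set of initially-$1$ vertices is $f$-critical; the small path graph and thresholds are a made-up example, not an instance from the paper.

```python
# Illustrative sketch of the definitions above: one synchronous step of an
# f-reversible process, and a check that a candidate set is f-critical.

def step(labels, adj, f):
    """One synchronous step: v flips iff it has at least f(v) opposite-labeled neighbors."""
    new = {}
    for v, lab in labels.items():
        opposite = sum(1 for u in adj[v] if labels[u] != lab)
        new[v] = 1 - lab if opposite >= f[v] else lab
    return new

def is_f_critical(ones, adj, f):
    """All vertices reach label 1 within one step, and the all-ones labeling is stable."""
    labels = {v: (1 if v in ones else 0) for v in adj}
    after = step(labels, adj, f)
    if any(lab != 1 for lab in after.values()):
        return False
    return step(after, adj, f) == after

if __name__ == "__main__":
    # Path a - b - c with f(a) = f(c) = 2 and f(b) = 1. Starting from ones on {a, c},
    # b flips to 1 (two opposite neighbors >= f(b)) while a and c keep their labels,
    # and the all-ones labeling is then stable, so {a, c} is f-critical.
    adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
    f = {"a": 2, "b": 1, "c": 2}
    print(is_f_critical({"a", "c"}, adj, f))  # True
    print(is_f_critical({"a"}, adj, f))       # False: c still has label 0 after one step
```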

Effective Resistance in Simplicial Complexes as Bilinear Forms: Generalizations and Properties

from arXiv: Computational Geometry

Authors: Inés García-Redondo, Claudia Landi, Sarah Percival, Anda Skeja, Bei Wang, Ling Zhou

The concept of effective resistance, originally introduced in electrical circuit theory, has been extended to the setting of graphs by interpreting each edge as a resistor. In this context, the effective resistance between two vertices quantifies the total opposition to current flow when a unit current is injected at one vertex and extracted at the other. Beyond its physical interpretation, the effective resistance encodes rich structural and geometric information about the underlying graph: it defines a metric on the vertex set, relates to the topology of the graph through Foster's theorem, and determines the probability of an edge appearing in a random spanning tree. Generalizations of effective resistance to simplicial complexes have been proposed in several forms, often formulated as matrix products of standard operators associated with the complex. In this paper, we present a twofold generalization of the effective resistance. First, we introduce a novel, basis-independent bilinear form, derived from an algebraic reinterpretation of circuit theory, that extends the classical effective resistance from graphs. Second, we extend this bilinear form to simplices, chains, and cochains within simplicial complexes. This framework subsumes and unifies all existing matrix-based formulations of effective resistance. Moreover, we establish higher-order analogues of several fundamental properties known in the graph case: (i) we prove that effective resistance induces a pseudometric on the space of chains and a metric on the space of cycles, and (ii) we provide a generalization of Foster's Theorem to simplicial complexes.
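
For the classical graph case recalled at the start of this abstract, effective resistance can be computed from the Moore-Penrose pseudoinverse of the graph Laplacian; the sketch below does this for a small cycle graph (a made-up example) and does not touch the paper's bilinear-form generalization.

```python
# Classical graph effective resistance (the starting point of the abstract),
# computed via the Moore-Penrose pseudoinverse of the graph Laplacian:
#   R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v).
import numpy as np

def effective_resistance(n, edges, u, v):
    """Unit-resistance effective resistance between vertices u and v."""
    L = np.zeros((n, n))
    for a, b in edges:
        L[a, a] += 1
        L[b, b] += 1
        L[a, b] -= 1
        L[b, a] -= 1
    Lp = np.linalg.pinv(L)
    x = np.zeros(n)
    x[u], x[v] = 1.0, -1.0
    return float(x @ Lp @ x)

if __name__ == "__main__":
    # 4-cycle: two parallel paths of resistance 1 and 3 between adjacent vertices,
    # so R_eff = (1 * 3) / (1 + 3) = 0.75.
    cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(round(effective_resistance(4, cycle4, 0, 1), 3))   # 0.75
```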

A number-theoretic conjecture implying faster algorithms for polynomial factorization and integer factorization

from arXiv: Data Structures and Algorithms

Authors: Chris Umans, Siki Wang

The fastest known algorithm for factoring a degree $n$ univariate polynomial over a finite field $\mathbb{F}_q$ runs in time $O(n^{3/2 + o(1)}\text{polylog } q)$, and there is a reason to believe that the $3/2$ exponent represents a ''barrier'' inherent in algorithms that employ a so-called baby-steps-giant-steps strategy. In this paper, we propose a new strategy with the potential to overcome the $3/2$ barrier. In doing so we are led to a number-theoretic conjecture, one form of which is that there are sets $S, T$ of cardinality $n^\beta$, consisting of positive integers of magnitude at most $\exp(n^\alpha)$, such that every integer $i \in [n]$ divides $s-t$ for some $s \in S, t \in T$. Achieving $\alpha + \beta \le 1 + o(1)$ is trivial; we show that achieving $\alpha, \beta < 1/2$ (together with an assumption that $S, T$ are structured) implies an improvement to the exponent 3/2 for univariate polynomial factorization. Achieving $\alpha = \beta = 1/3$ is best-possible and would imply an exponent 4/3 algorithm for univariate polynomial factorization. Interestingly, a second consequence would be a reduction of the current-best exponent for deterministic (exponential) algorithms for factoring integers, from $1/5$ to $1/6$.
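
To make the conjectured condition concrete, here is a brute-force checker for small $n$: it tests whether every $i \in [n]$ divides $s-t$ for some $s \in S$, $t \in T$. The example sets are arbitrary illustrations (including the trivial single-element construction), not the structured sets the conjecture asks for.

```python
# Illustrative brute-force check of the conjecture's condition for small n:
# does every i in [n] divide s - t for some s in S, t in T?
import math

def covers(n, S, T):
    return all(any((s - t) % i == 0 for s in S for t in T) for i in range(1, n + 1))

if __name__ == "__main__":
    n = 12
    # Trivial regime: a single element of size roughly e^n (here lcm(1..n)) with T = {0}.
    L = math.lcm(*range(1, n + 1))
    print(covers(n, S={L}, T={0}))            # True, but the magnitude is huge
    print(covers(n, S={1, 2, 3}, T={0, 7}))   # False: 7 divides none of the differences
```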

Faster MAX-CUT on Bounded Threshold Rank Graphs

from arXiv: Data Structures and Algorithms

Authors: Prashanti Anderson, Samuel B. Hopkins, Amit Rajaraman, David Steurer

We design new algorithms for approximating 2CSPs on graphs with bounded threshold rank, that is, whose normalized adjacency matrix has few eigenvalues larger than $\varepsilon$, smaller than $-\varepsilon$, or both. Unlike on worst-case graphs, 2CSPs on bounded threshold rank graphs can be $(1+O(\varepsilon))$-approximated efficiently. Prior approximation algorithms for this problem run in time exponential in the threshold rank and $1/\varepsilon$. Our algorithm has running time which is polynomial in $1/\varepsilon$ and exponential in the threshold rank of the label-extended graph, and near-linear in the input size. As a consequence, we obtain the first $(1+O(\varepsilon))$ approximation for MAX-CUT on bounded threshold rank graphs running in $\mathrm{poly}(1/\varepsilon)$ time. We also improve the state-of-the-art running time for 2CSPs on bounded threshold-rank graphs from polynomial in input size to near-linear via a new comparison inequality between the threshold rank of the label-extended graph and base graph. Our algorithm is a simple yet novel combination of subspace enumeration and semidefinite programming.
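
As a purely illustrative aid, the sketch below computes the threshold rank described in the first sentence, the number of eigenvalues of the normalized adjacency matrix outside $[-\varepsilon, \varepsilon]$ (or on one side only); the random graph is a made-up example and none of the paper's algorithm is reproduced.

```python
# Illustrative sketch: threshold rank of a graph as used above -- the number of
# eigenvalues of the normalized adjacency matrix D^{-1/2} A D^{-1/2} that are
# larger than eps, smaller than -eps, or either.
import numpy as np

def threshold_rank(A, eps, side="both"):
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = 1.0 / np.sqrt(deg[nz])
    N = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    eigs = np.linalg.eigvalsh(N)
    if side == "positive":
        return int(np.sum(eigs > eps))
    if side == "negative":
        return int(np.sum(eigs < -eps))
    return int(np.sum(np.abs(eigs) > eps))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 200, 0.3
    A = np.triu((rng.random((n, n)) < p).astype(float), 1)
    A = A + A.T                         # a dense random undirected graph
    print(threshold_rank(A, eps=0.25))  # dense random graphs have small threshold rank
```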

Learning and Testing Convex Functions

from arXiv: Data Structures and Algorithms

Authors: Renato Ferreira Pinto, Cassandra Marcussen, Elchanan Mossel, Shivam Nadimpalli

We consider the problems of \emph{learning} and \emph{testing} real-valued convex functions over Gaussian space. Despite the extensive study of function convexity across mathematics, statistics, and computer science, its learnability and testability have largely been examined only in discrete or restricted settings -- typically with respect to the Hamming distance, which is ill-suited for real-valued functions. In contrast, we study these problems in high dimensions under the standard Gaussian measure, assuming sample access to the function and a mild smoothness condition, namely Lipschitzness. A smoothness assumption is natural and, in fact, necessary even in one dimension: without it, convexity cannot be inferred from finitely many samples. As our main results, we give: - Learning Convex Functions: An agnostic proper learning algorithm for Lipschitz convex functions that achieves error $\varepsilon$ using $n^{O(1/\varepsilon^2)}$ samples, together with a complementary lower bound of $n^{\mathrm{poly}(1/\varepsilon)}$ samples in the \emph{correlational statistical query (CSQ)} model. - Testing Convex Functions: A tolerant (two-sided) tester for convexity of Lipschitz functions with the same sample complexity (as a corollary of our learning result), and a one-sided tester (which never rejects convex functions) using $O(\sqrt{n}/\varepsilon)^n$ samples.

Public Goods Games in Directed Networks with Constraints on Sharing

from arXiv: Data Structures and Algorithms

Authors: Argyrios Deligkas, Gregory Gutin, Mark Jones, Philip R. Neary, Anders Yeo

In a public goods game, every player chooses whether or not to buy a good that all neighboring players will have access to. We consider a setting in which the good is indivisible, neighboring players are out-neighbors in a directed graph, and there is a capacity constraint on their number, k, that can benefit from the good. This means that each player makes a two-pronged decision: decide whether or not to buy and, conditional on buying, choose which k out-neighbors to share access. We examine both pure and mixed Nash equilibria in the model from the perspective of existence, computation, and efficiency. We perform a comprehensive study for these three dimensions with respect to both sharing capacity (k) and the network structure (the underlying directed graph), and establish sharp complexity dichotomies for each.

Linear-Space Extragradient Methods for Fast, Large-Scale Optimal Transport

from arXiv: Data Structures and Algorithms

Authors: Matthew X. Burns, Jiaming Liang

Optimal transport (OT) and its entropy-regularized form (EOT) have become increasingly prominent computational problems, with applications in machine learning and statistics. Recent years have seen a commensurate surge in first-order methods aiming to improve the complexity of large-scale (E)OT. However, there has been a consistent tradeoff: attaining state-of-the-art rates requires $\mathcal{O}(n^2)$ storage to enable ergodic primal averaging. In this work, we demonstrate that recently proposed primal-dual extragradient methods (PDXG) can be implemented entirely in the dual with $\mathcal{O}(n)$ storage. Additionally, we prove that regularizing the reformulated OT problem is equivalent to EOT with extensions to entropy-regularized barycenter problems, further widening the applications of the proposed method. The proposed dual-only extragradient method (DXG) is the first algorithm to achieve $\mathcal{O}(n^2\varepsilon^{-1})$ complexity for $\varepsilon$-approximate OT with $\mathcal{O}(n)$ memory. Numerical experiments demonstrate that the dual extragradient method scales favorably in non/weakly-regularized regimes compared to existing algorithms, though future work is needed to improve performance in certain problem classes.

Improved Differentially Private Algorithms for Rank Aggregation

from arXiv: Data Structures and Algorithms

Authors: Quentin Hillebrand, Pasin Manurangsi, Vorapong Suppakitpaisarn, Phanu Vajanopath

Rank aggregation is the task of combining the rankings of items from multiple users into a single ranking that best represents the users' rankings. Alabi et al. (AAAI'22) present differentially private (DP) polynomial-time approximation schemes (PTASes) and $5$-approximation algorithms with certain additive errors for the Kemeny rank aggregation problem in both the central and local models. In this paper, we present improved DP PTASes with smaller additive error in the central model. Furthermore, we are the first to study the footrule rank aggregation problem under DP. We give a near-optimal algorithm for this problem; as a corollary, this leads to 2-approximation algorithms with the same additive error as the $5$-approximation algorithms of Alabi et al. for the Kemeny rank aggregation problem in both the central and local models.
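
To ground the two objectives named above, here is a small non-private sketch of the Kemeny cost (sum of Kendall tau distances) and the footrule cost (sum of Spearman footrule distances) of a candidate ranking against a profile; the toy rankings are made up and nothing here implements the paper's DP mechanisms.

```python
# Illustrative sketch (no differential privacy): the two aggregation objectives
# mentioned above. Kemeny cost sums Kendall-tau distances to the users' rankings;
# footrule cost sums the total displacement of positions.
from itertools import combinations

def kendall_tau(r1, r2):
    pos1 = {x: i for i, x in enumerate(r1)}
    pos2 = {x: i for i, x in enumerate(r2)}
    return sum(
        1
        for x, y in combinations(r1, 2)
        if (pos1[x] - pos1[y]) * (pos2[x] - pos2[y]) < 0   # pair ordered differently
    )

def footrule(r1, r2):
    pos1 = {x: i for i, x in enumerate(r1)}
    pos2 = {x: i for i, x in enumerate(r2)}
    return sum(abs(pos1[x] - pos2[x]) for x in r1)

def aggregation_cost(candidate, profile, dist):
    return sum(dist(candidate, r) for r in profile)

if __name__ == "__main__":
    profile = [["a", "b", "c", "d"], ["b", "a", "c", "d"], ["a", "c", "b", "d"]]
    candidate = ["a", "b", "c", "d"]
    print(aggregation_cost(candidate, profile, kendall_tau))  # Kemeny cost: 2
    print(aggregation_cost(candidate, profile, footrule))     # footrule cost: 4
```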

An Efficient Algorithm for Minimizing Ordered Norms in Fractional Load Balancing

from arXiv: Data Structures and Algorithms

Authors: Daniel Blankenburg, Antonia Ellerbrock, Thomas Kesselheim, Jens Vygen

We study the problem of minimizing an ordered norm of a load vector (indexed by a set of $d$ resources), where a finite number $n$ of customers $c$ contribute to the load of each resource by choosing a solution $x_c$ in a convex set $X_c \subseteq \mathbb{R}^d_{\geq 0}$; so we minimize $||\sum_{c}x_c||$ for some fixed ordered norm $||\cdot||$. We devise a randomized algorithm that computes a $(1+\varepsilon)$-approximate solution to this problem and makes, with high probability, $\mathcal{O}((n+d) (\varepsilon^{-2}+\log\log d)\log (n+d))$ calls to oracles that minimize linear functions (with non-negative coefficients) over $X_c$. While this has been known for the $\ell_{\infty}$ norm via the multiplicative weights update method, existing proof techniques do not extend to arbitrary ordered norms. Our algorithm uses a resource price mechanism that is motivated by the follow-the-regularized-leader paradigm, and is expressed by smooth approximations of ordered norms. We need and show that these have non-trivial stability properties, which may be of independent interest. For each customer, we define dynamic cost budgets, which evolve throughout the algorithm, to determine the allowed step sizes. This leads to non-uniform updates and may even reject certain oracle solutions. Using non-uniform sampling together with a martingale argument, we can guarantee sufficient expected progress in each iteration, and thus bound the total number of oracle calls with high probability.
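
For readers unfamiliar with ordered norms, the sketch below evaluates the standard ordered (ordered-weighted) norm under my reading of the term: sort the load vector in non-increasing order and take a weighted sum with non-increasing non-negative weights, so that $\ell_\infty$, top-$k$, and $\ell_1$ arise as special cases. This is only the definition, not anything from the paper's algorithm.

```python
# Illustrative sketch: evaluating an ordered norm ||x||_w = sum_i w_i * x_[i],
# where x_[i] is the i-th largest entry and w_1 >= w_2 >= ... >= w_d >= 0.
# (Load vectors are non-negative, as in the problem above.)

def ordered_norm(x, w):
    assert len(x) == len(w) and all(a >= b for a, b in zip(w, w[1:])) and w[-1] >= 0
    return sum(wi * xi for wi, xi in zip(w, sorted(x, reverse=True)))

if __name__ == "__main__":
    load = [3.0, 1.0, 4.0, 1.5]
    d = len(load)
    print(ordered_norm(load, [1] + [0] * (d - 1)))     # l_infinity (makespan): 4.0
    print(ordered_norm(load, [1, 1] + [0] * (d - 2)))  # top-2 norm: 7.0
    print(ordered_norm(load, [1] * d))                 # l_1 (total load): 9.5
```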

TSP integrality gap via 2-edge-connected multisubgraph problem under coincident IP optima

from arXiv: Data Structures and Algorithms

Authors: Toshiaki Yamanaka

Determining the integrality gap of the linear programming (LP) relaxation of the metric traveling salesman problem (TSP) remains a long-standing open problem. We introduce a transfer principle: when the integer optimum of the 2-edge-connected multisubgraph problem (2ECM) is a unique Hamiltonian cycle $T$, any $\alpha$-approximation algorithm for 2ECM that outputs a Hamiltonian cycle yields an $\alpha$-approximation for TSP. We further develop a cut-margin uniqueness framework that certifies $T$ as the unique integer optimum for both problems and is stable under $\ell_\infty$-bounded perturbations. We show that, if instances exist where the 2ECM has both a unique Hamiltonian cycle integer optimum and a half-integral LP solution, then the TSP integrality gap is at most 4/3 by the algorithm of Boyd et al. (SIAM Journal on Discrete Mathematics 36:1730--1747, 2022). Constructing such instances remains an open problem.

R-enum Revisited: Speedup and Extension for Context-Sensitive Repeats and Net Frequencies

from arXiv: Data Structures and Algorithms

Authors: Kotaro Kimura, Tomohiro I

Nishimoto and Tabei [CPM, 2021] proposed r-enum, an algorithm to enumerate various characteristic substrings, including maximal repeats, in a string $T$ of length $n$ in $O(r)$ words of compressed working space, where $r \le n$ is the number of runs in the Burrows-Wheeler transform (BWT) of $T$. Given the run-length encoded BWT (RLBWT) of $T$, r-enum runs in $O(n\log\log_{w}(n/r))$ time in addition to the time linear to the number of output strings, where $w=\Theta(\log n)$ is the word size. In this paper, we improve the $O(n\log\log_{w}(n/r))$ term to $O(n)$. We also extend r-enum to compute other context-sensitive repeats such as near-supermaximal repeats (NSMRs) and supermaximal repeats, and the context diversity for every maximal repeat in the same complexities. Furthermore, we study the occurrences that witness NSMRs, which have recently attracted attention under the name of net occurrences: An occurrence of a repeat is called a net occurrence if it is not covered by another repeat, and the net frequency of a repeat is the number of its net occurrences. With this terminology, an NSMR is defined to be a repeat with a positive net frequency. Given the RLBWT of $T$, we show how to compute the set $S^{nsmr}$ of all NSMRs in $T$ together with their net frequency/occurrences in $O(n)$ time and $O(r)$ space. We also show that an $O(r)$-space data structure can be built from the RLBWT to support queries of computing the net frequency of any query pattern $P$ in $O(|P|)$ time. The data structure is built in $O(r)$ space and in $O(n)$ time with high probability or deterministic $O(n+|S^{nsmr}|\log\log\min(\sigma,|S^{nsmr}|))$ time, where $\sigma \le r$ is the alphabet size of $T$. To achieve this, we prove that the total number of net occurrences is less than $2r$. We also get a new upper bound $2r$ of the number of minimal unique substrings in $T$, which may be of independent interest.

Cycle Basis Algorithms for Reducing Maximum Edge Participation

from arXiv: Data Structures and Algorithms

Authors: Fan Wang, Sandy Irani

We study the problem of constructing cycle bases of graphs with low maximum edge participation, defined as the maximum number of basis cycles that share any single edge. This quantity, though less studied than total weight or length, plays a critical role in quantum fault tolerance because it directly impacts the overhead of lattice surgery procedures used to implement an almost universal quantum gate set. Building on a recursive algorithm of Freedman and Hastings, we introduce a family of load-aware heuristics that adaptively select vertices and edges to minimize edge participation throughout the cycle basis construction. Our approach improves empirical performance on random regular graphs and on graphs derived from small quantum codes. We further analyze a simplified balls-into-bins process to establish lower bounds on edge participation. While the model differs from the cycle basis algorithm on real graphs, it captures what can be proven for our heuristics without using complex graph theoretic properties related to the distribution of cycles in the graph. Our analysis suggests that the maximum load of our heuristics grows on the order of (log n)^2. Our results indicate that careful cycle basis construction can yield significant practical benefits in the design of fault-tolerant quantum systems. This question also carries theoretical interest, as it is essentially identical to the basis number of a graph, defined as the minimum possible maximum edge participation over all cycle bases.
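
As a quick illustration of the objective (my own toy code, not the authors' heuristics; it assumes the networkx library and uses its spanning-tree cycle basis as an unoptimized baseline), the quantity being minimized can be computed as follows:

    import networkx as nx
    from collections import Counter

    def max_edge_participation(cycles):
        """Maximum number of basis cycles sharing any single edge."""
        load = Counter()
        for cyc in cycles:                               # cyc is a list of vertices
            for u, v in zip(cyc, cyc[1:] + cyc[:1]):     # consecutive edges, incl. closing edge
                load[frozenset((u, v))] += 1
        return max(load.values())

    G = nx.random_regular_graph(3, 20, seed=0)           # small random 3-regular graph
    basis = nx.cycle_basis(G)                            # a cycle basis (not load-aware)
    print(len(basis), "basis cycles, max edge participation:", max_edge_participation(basis))

The load-aware heuristics in the paper try to drive this maximum down while still producing a basis.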

Beating Meet-in-the-Middle for Subset Balancing Problems

from arXiv: Data Structures and Algorithms

Authors: Tim Randolph, Karol Węgrzycki

We consider exact algorithms for Subset Balancing, a family of related problems that generalizes Subset Sum, Partition, and Equal Subset Sum. Specifically, given as input an integer vector $\vec{x} \in \mathbb{Z}^n$ and a constant-size coefficient set $C \subset \mathbb{Z}$, we seek a nonzero solution vector $\vec{c} \in C^n$ satisfying $\vec{c} \cdot \vec{x} = 0$. For $C = \{-d,\ldots,d\}$, $d > 1$ and $C = \{-d,\ldots,d\}\setminus\{0\}$, $d > 2$, we present algorithms that run in time $O(|C|^{(0.5 - \varepsilon)n})$ for a constant $\varepsilon > 0$ that depends only on $C$. These are the first algorithms that break the $O(|C|^{n/2})$-time ``Meet-in-the-Middle barrier'' for these coefficient sets in the worst case. This improves on the result of Chen, Jin, Randolph and Servedio (SODA 2022), who broke the Meet-in-the-Middle barrier on these coefficient sets in the average-case setting. We also improve the best exact algorithm for Equal Subset Sum (Subset Balancing with $C = \{-1,0,1\}$), due to Mucha, Nederlof, Pawlewicz, and Węgrzycki (ESA 2019), by an exponential margin. This positively answers an open question of Jin, Williams, and Zhang (ESA 2025). Our results leave two natural cases in which we cannot yet break the Meet-in-the-Middle barrier: $C = \{-2, -1, 1, 2\}$ and $C = \{-1, 1\}$ (Partition). Our results bring the representation technique of Howgrave-Graham and Joux (CRYPTO 2010) from average-case to worst-case inputs for many $C$. This requires a variety of new techniques: we present strategies for (1) achieving good ``mixing'' with worst-case inputs, (2) creating flexible input representations for coefficient sets without 0, and (3) quickly recovering compatible solution pairs from sets of vectors containing ``pseudosolution pairs''. These techniques may find application to other algorithmic problems on integer sums or be of independent interest.
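
To make the problem statement concrete (an illustration of the definition only, nothing like the paper's algorithms), a brute-force solver over $C^n$ looks as follows; the Meet-in-the-Middle baseline would instead split the coordinates in half and match partial sums, which is the $O(|C|^{n/2})$ barrier discussed above.

    from itertools import product

    def subset_balancing(x, C):
        """Return a nonzero c in C^n with <c, x> = 0, or None if none exists (brute force)."""
        for c in product(C, repeat=len(x)):
            if any(ci != 0 for ci in c) and sum(ci * xi for ci, xi in zip(c, x)) == 0:
                return c
        return None

    # Equal Subset Sum is the special case C = {-1, 0, 1}:
    print(subset_balancing([3, 1, 4, 2], [-1, 0, 1]))    # finds e.g. (-1, -1, 1, 0), i.e. 3 + 1 = 4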

Discounted Cuts: A Stackelberg Approach to Network Disruption

from arXiv: Data Structures and Algorithms

Authors: Pål Grønås Drange, Fedor V. Fomin, Petr Golovach, Danil Sagunov

We study a Stackelberg variant of the classical Most Vital Links problem, modeled as a one-round adversarial game between an attacker and a defender. The attacker strategically removes up to $k$ edges from a flow network to maximally disrupt flow between a source $s$ and a sink $t$, after which the defender optimally reroutes the remaining flow. To capture this attacker--defender interaction, we introduce a new mathematical model of discounted cuts, in which the cost of a cut is evaluated by excluding its $k$ most expensive edges. This model generalizes the Most Vital Links problem and uncovers novel algorithmic and complexity-theoretic properties. We develop a unified algorithmic framework for analyzing various forms of discounted cut problems, including minimizing or maximizing the cost of a cut under discount mechanisms that exclude either the $k$ most expensive or the $k$ cheapest edges. While most variants are NP-complete on general graphs, our main result establishes polynomial-time solvability for all discounted cut problems in our framework when the input is restricted to bounded-genus graphs, a relevant class that includes many real-world networks such as transportation and infrastructure networks. With this work, we aim to open collaborative bridges between artificial intelligence, algorithmic game theory, and operations research.
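
The discounted-cut objective is easy to state in code (a toy brute force over vertex bipartitions on a made-up instance, not the authors' algorithms). It also explains why the model captures the attack: for any fixed $s$-$t$ cut, the attacker's best $k$ removals are that cut's $k$ most expensive edges, and minimizing over removal sets and over cuts in either order yields exactly the minimum discounted cut.

    from itertools import combinations

    def min_discounted_cut(nodes, edges, s, t, k):
        """Min over s-t cuts of (total cut weight minus its k most expensive edges)."""
        others = [v for v in nodes if v not in (s, t)]
        best = float("inf")
        for r in range(len(others) + 1):
            for extra in combinations(others, r):
                side = {s, *extra}                       # source side of the cut
                cut = sorted((w for u, v, w in edges if (u in side) != (v in side)),
                             reverse=True)
                best = min(best, sum(cut[k:]))           # discount: drop the k heaviest edges
        return best

    nodes = ["s", "a", "b", "t"]
    edges = [("s", "a", 3), ("s", "b", 2), ("a", "b", 1), ("a", "t", 2), ("b", "t", 3)]
    print(min_discounted_cut(nodes, edges, "s", "t", k=1))   # 2 on this toy instance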

Faster Algorithms for Structured Matrix Multiplication via Flip Graph Search

from arXiv: Data Structures and Algorithms

Authors: Kirill Khoruzhii, Patrick Gelß, Sebastian Pokutta

We give explicit low-rank bilinear non-commutative schemes for multiplying structured $n \times n$ matrices with $2 \leq n \leq 5$, which serve as building blocks for recursive algorithms with improved multiplicative factors in asymptotic complexity. Our schemes are discovered over $\mathbb{F}_2$ or $\mathbb{F}_3$ and lifted to $\mathbb{Z}$ or $\mathbb{Q}$. Using a flip graph search over tensor decompositions, we derive schemes for general, upper-triangular, lower-triangular, symmetric, and skew-symmetric inputs, as well as products of a structured matrix with its transpose. In particular, we obtain $4 \times 4$ rank-34 schemes: (i) multiplying a general matrix by its transpose using 10 recursive calls, improving the factor from 26/41 (0.634) to 8/13 (0.615); and (ii) multiplying an upper-triangular matrix by a general matrix using 12 recursive calls, improving the factor from 8/13 (0.615) to 22/37 (0.595). Additionally, using $\mathbb{F}_3$ flip graphs, we discover schemes over $\mathbb{Q}$ that fundamentally require the inverse of 2, including a $2 \times 2$ symmetric-symmetric multiplication of rank 5 and a $3 \times 3$ skew-symmetric-general multiplication of rank 14 (improving upon AlphaTensor's 15).

Support Recovery in One-bit Compressed Sensing with Near-Optimal Measurements and Sublinear Time

from arXiv: Data Structures and Algorithms

Authors: Xiaxin Li, Arya Mazumdar

The problem of support recovery in one-bit compressed sensing (1bCS) aims to recover the support of a signal $x\in \mathbb{R}^n$, denoted supp$(x)$, from the observation $y=\text{sign}(Ax)$, where $A\in \mathbb{R}^{m\times n}$ is a sensing matrix and $|\text{supp}(x)|\leq k$, $k \ll n$. In this setting, most prior works have a recovery runtime of $\Omega(n)$. In this paper, we propose two schemes with sublinear $o(n)$ runtime. (1.i): For universal exact support recovery, a scheme with $m=O(k^2\log(n/k)\log n)$ measurements and runtime $D=O(km)$. (1.ii): For universal $\varepsilon$-approximate support recovery, the same scheme with $m=O(k\varepsilon^{-1}\log(n/k)\log n)$ and runtime $D=O(\varepsilon^{-1}m)$, improving the runtime significantly at the cost of an extra $O(\log n)$ factor in the number of measurements compared to the current optimum (Matsumoto et al., 2023). (2): For probabilistic exact support recovery in the sublinear regime, a scheme with $m=O(k\frac{\log k}{\log\log k}\log n)$ measurements and runtime $O(m)$, with vanishing error probability, improving the recent result of Yang et al., 2025.
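
For readers new to the model, here is a tiny synthetic illustration of the measurement process $y=\mathrm{sign}(Ax)$ (the parameters and the naive correlation-based guess are my own placeholders; the paper's schemes are sublinear-time and come with guarantees, which this snippet does not):

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k = 1000, 200, 5

    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)       # true support, |supp(x)| = k
    x[support] = rng.normal(size=k)

    A = rng.normal(size=(m, n))                          # Gaussian sensing matrix
    y = np.sign(A @ x)                                   # one-bit observations

    # Naive Omega(n)-time guess: take the k columns most correlated with y.
    guess = np.argsort(np.abs(A.T @ y))[-k:]
    print("true support:", sorted(support), "  guess:", sorted(guess))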

Deviation Dynamics in Cardinal Hedonic Games

from arXiv: Data Structures and Algorithms

Authors: Valentin Zech, Martin Bullinger

Computing stable partitions in hedonic games is a challenging task because there exist games in which stable outcomes do not exist. Even more, these No-instances can often be leveraged to prove computational hardness results. We make this impression rigorous in a dynamic model of cardinal hedonic games by providing meta theorems. These imply hardness of deciding about the possible or necessary convergence of deviation dynamics based on the mere existence of No-instances. Our results hold for additively separable, fractional, and modified fractional hedonic games (ASHGs, FHGs, and MFHGs). Moreover, they encompass essentially all reasonable stability notions based on single-agent deviations. In addition, we propose dynamics as a method to find individually rational and contractually individual stable (CIS) partitions in ASHGs. In particular, we find that CIS dynamics from the singleton partition possibly converge after a linear number of deviations but may require an exponential number of deviations in the worst case.

Sunday, November 16

All in TCS at FOCS 2025 (TCS Job Openings spreadsheet) (apply by December 14, 2025)

from CCI: jobs

As part of FOCS 2025, we are maintaining an (asynchronous) spreadsheet to keep track of open positions in TCS. If your group/institution plans to hire a postdoc or a researcher, please fill the form! This information will be shared on the website, and will enable graduating students and junior researchers to learn about the job […]

As part of FOCS 2025, we are maintaining an (asynchronous) spreadsheet to keep track of open positions in TCS. If your group/institution plans to hire a postdoc or a researcher, please fill the form! This information will be shared on the website, and will enable graduating students and junior researchers to learn about the job opening, and potentially contact you during FOCS.

Website: https://focs.computer.org/2025/job-openings/
Email: clement.canonne@sydney.edu.au

By shacharlovett

TR25-182 | Proving the PCP Theorem with 1.5 proof compositions (or yet another PCP construction) | Oded Goldreich

from ECCC Papers

The original proof of the PCP Theorem composes a Reed-Muller-based PCP with itself, and then composes the resulting PCP with a Hadamard-based PCP [Arora, Lund, Motwani, Sudan and Szegedy ({\em JACM}, 1998)]. Hence, that proof applies a (general) proof composition result twice. (Dinur's alternative proof consists of logarithmically many gap amplification steps, where each step includes an operation akin to proof composition.) A recent work of Amireddy, Behera, Srinivasan, Sudan, and Willumsgaard ({\em ECCC}, TR25-165) presents a new PCP system such that composing the RM-based PCP with the new PCP yields a new proof of the PCP Theorem. Essentially, for every $\alpha>0$, they present a (direct) PCP of constant query complexity and randomness complexity $n^\alpha$, where $n$ denotes the length of the input. (In contrast, recall that the Hadamard-based PCP has quadratic randomness complexity.) We note that their construction has other merits, beyond the fact that it does not use composition. Here we present a different PCP system of constant query complexity and randomness $n^\alpha$. Essentially, we use the RM-based PCP, but with a constant number of dimensions and a large alphabet (equiv., $O(1/\alpha)$-variate polynomials over a field of size $n^{O(\alpha)}$). We then encode the field elements by the Hadamard code and incorporate a tester akin to the verifier used in the Hadamard-based PCP. Whether or not this counts as composition is debatable (and this is reflected in the current title), but for sure this is a non-generic composition that does not involve a preparatory parallelization step.

Test of Time Awards: A Good Idea but ....

from Computational Complexity

Since there is now a CCC Test-of-Time Award, see here,  (CCC stands for Computational Complexity Conference), I decided to look at other Test-of-Time awards in computer science.

Below is a list of various computer science Test-of-Time awards, along with their
eligibility requirements.

1) IEEE Visualization (VIS) Test of Time Award, see here.   VIS=Visualization.  Their 2023 award page says:

Papers are selected for each of the three historic conferences (VAST, InfoVis, and SciVis). ... 
This year VAST gave out a 10-year test of time award, InfoVis a 10- and 20-year award, and SciVis a 13, 14 and 25 year award.

2) SC Test of Time Award, see here. SC=Supercomputing Conference.  Their site says:

Papers appearing in the SC Program 10 to 25 years prior are eligible.

3) CIKM Test of Time Award, see here.   CIKM=Conf. on Information and Knowledge Management.  Their page states:

The Test of Time Award is given for CIKM papers published at least ten years ago.

4) ACM SIGCSE Test of Time Award, see here. SIGCSE = Special Interest Group on Computer Science Education.  ACM=Association for Computing Machinery. The name is from the era when computers looked like angry filing cabinets.  The ACM SIGCSE Test-of-Time page states:

The award recognizes an outstanding paper published in the SIGCSE community. 

 Note that this is broader than just their conference. Good!

5) LICS Test of Time Award, see here. LICS=Logic in Computer Science.  Their award page says:

The LICS Test-of-Time Award recognizes a small number of papers from the LICS proceedings from 20 years prior.

6) STOC Test of Time Award, see here. STOC = Symposium on Theory of Computing.  Their page states:

There are three awards---papers presented at the STOC conferences in 10, 20, or 30 years ago [they later say there is some flexibility on that]. 

7) FOCS Test of Time Award, see here. FOCS stands for Foundations of Computer Science.  Their page states:

The awards recognize papers published in the Proceedings of the Annual IEEE Symposium on FOCS. [From elsewhere on the page they have three categories: papers 10-years ago, 20-years-ago, and 30-years ago, but they have some flexibility with that.]

8) SIGecom Test of Time Award, see here. SIGecom = Special Interest Group---Economics and Computation.  Their page states:

...must have been first published in an archival journal or conference proceedings 10–25 years before. At least one author must not be dead. [A surprisingly nontrivial constraint in a Test-of-Time award.]

I think this is the only award on this page that stated the not-dead criteria explicitly.  Are people in Economics and Computation publishing papers into their 90's? While in hospice care?

9) NeurIPS Test of Time Award, see here.  NeurIPS = Neural Information Processing Systems.  I couldn’t find an eligibility description, so I asked ChatGPT, which confidently stated:

Yes—the paper must have been published in NeurIPS at least 10 years ago. 

If this is wrong, please let me know. ChatGPT is sometimes confident the way undergraduates are confident before seeing the homework solution.

10) CCC Test of Time Award, see here. CCC=Computational Complexity Conference. Their page states:

Eligible papers are those that appeared in CCC (or Structures, pre-1996) at least 10 years ago. 

------------------------------------------------------------------------
These awards raise some questions:

1) Why tie eligibility to a conference?  Suppose Abisola proves P ≠ NP and publishes it in Annals of Mathematics.  Under many of these rules, she would be ineligible for most theory Test-of-Time awards.

CON: Important results may not appear in the “right” conference or any conference at all.

PRO: None. Absolutely none.  Why is it done?  To publicize the conference and provide a nice ceremony venue.  These are not strong intellectual reasons. (I had a nastier comment about this but ChatGPT convinced me to be more polite.)

My proofreader brought up the following:

a) Who will give out the award? Answer: An organization like the ACM, and it DOES NOT need to go to an ACM conference.

b) How to limit what papers are eligible? Any paper that appeared in a conference or journal (though see item 3 below). That seems like a rather large set of papers to consider; however, after 10 years we DO know which papers stood the test of time. As an example: when Lance made his best-theorems-of-the-decade lists, he did not pay attention to which venue a result appeared in.

2) Why the 10–25 years ago window?  I understand waiting 10 years—time needs time.  But why exclude older papers?  What if the paper Some Connections Between Bounded Query Classes and Nonuniform Complexity (STRUCTURES/CCC 1990) suddenly becomes important? Too bad: it aged out, like a carton of milk.

Are there examples of papers that became important many years after they were published?

I have one sort-of example: Google AI Overview says

The permanent of a matrix was first introduced independently by Cauchy in 1812 and later independently rediscovered by Binet in 1812. [The identical years make me suspicious, and also make the notion of ``later'' rather odd: did Cauchy do it in February and Binet in June?]

The permanent became much more important after Valiant, in 1979, showed that it was Sharp-P-Complete. So perhaps Cauchy's paper should get a test-of-time award.
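
(For readers who have not seen it: the permanent is just the determinant with the signs dropped, $\mathrm{perm}(A)=\sum_{\sigma\in S_n}\prod_{i=1}^n a_{i,\sigma(i)}$ versus $\det(A)=\sum_{\sigma\in S_n}\mathrm{sgn}(\sigma)\prod_{i=1}^n a_{i,\sigma(i)}$. Valiant's theorem is that computing the permanent is Sharp-P-complete even for 0/1 matrices, while the determinant is of course computable in polynomial time.)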

Fun Fact: The Wikipedia entry on The Permanent (see here) does not mention Valiant, though there is a separate Wikipedia entry on Computing the Permanent (see here) that does.

3) Why require publication in a journal or conference at all?  Suppose Perelman proves P ≠ NP, posts it on arXiv, and never submits it anywhere.  Perelman did just that with his proof of the Poincaré Conjecture, and that was good enough for a Fields Medal.

This touches on the much broader---and increasingly relevant---question: What is the future role of journals and conferences in the age of arXiv?

4) (Bonus question): Is there any real difference between the STOC conference and the FOCS conference in terms of the scope of the papers?  Was there ever?  I would guess no and no. Maybe some of my older readers can tell me, unless they are too busy writing papers in Economics and Computation. 

5) Another proofreader pointed out that it would be a good idea to live up to the title of this post, Test of Time Awards: A Good Idea but, and say why they are a good idea instead of bitching and moaning about eligibility. Good Idea! Some thoughts:

a) They might encourage researchers to aim for deep contributions, not just fashionable ones. I doubt this is true, since I doubt authors think about which award they might win.

b) They reward long-term impact over short-term excitement. Is it much of a reward? How do they benefit the recipient? I ask non-rhetorically.

c) They are an objective record of subjective opinion.  Useful for historians!
 

By gasarch

TR25-181 | On Cryptography and Distribution Verification, with Applications to Quantum Advantage | Bruno Pasqualotto Cavalar, Eli Goldin, Matthew Gray, Taiga Hiroka, Tomoyuki Morimae

from ECCC Papers

One of the most fundamental problems in the field of hypothesis testing is the identity testing problem: whether samples from some unknown distribution $\mathcal{G}$ are actually from some explicit distribution $\mathcal{D}$. It is known that when the distribution $\mathcal{D}$ has support $[N]$, the optimal sample complexity for the identity testing problem is roughly $O(\sqrt{N})$. However, many distributions of interest, including those which can be sampled efficiently, have exponential support size, and therefore the optimal identity tester also requires exponentially many samples. In this paper, we bypass this lower bound by considering restricted settings. The above $O(\sqrt{N})$-sample identity tester is constructed so that it is not fooled by any (even inefficiently-sampled) distributions. However, in most applications, the distributions under consideration are efficiently samplable, and therefore it is enough to consider only identity testers that are not fooled by efficiently-sampled distributions. In this setting we can hope to construct efficient identity testers. We investigate relations between efficient verification of classical/quantum distributions and classical/quantum cryptography, showing the following results:
- Classically efficiently samplable distributions are verifiable if and only if one-way functions do not exist.
- Quantumly efficiently samplable distributions are verifiable by $\mathbf{P}^\mathbf{PP}$ with a polynomial number of samples.
- Sampling-based quantum advantage can be verified quantumly (with a polynomial number of samples) if one-way puzzles do not exist.
- If QEFID pairs exist, then some quantumly efficiently samplable distributions are not verifiable.
To obtain these results, we introduce a quantum variant of time-bounded Kolmogorov complexity, and show a coding theorem and incompressibility for it. These technical contributions may be of independent interest.

TR25-180 | Low-soundness direct-product testers and PCPs from Kaufman--Oppenheim complexes | Ryan O'Donnell, Noah Singer

from ECCC Papers

We study the Kaufman--Oppenheim coset complexes (STOC 2018, Eur. J. Comb. 2023), which have an elementary and strongly explicit description. Answering an open question of Kaufman, Oppenheim, and Weinberger (STOC 2025), we show that they support sparse direct-product testers in the low soundness regime. Our proof relies on the HDX characterization of agreement testing by Bafna--Minzer and Dikstein--Dinur (both STOC 2024), the recent result of Kaufman et al., and follows techniques from Bafna--Lifshitz--Minzer and Dikstein--Dinur--Lubotzky (both FOCS 2024). Ultimately, the task reduces to showing dimension-independent coboundary expansion of certain $2$-dimensional subcomplexes of the KO complex; following the ``Dehn method'' of Kaufman and Oppenheim (ICALP 2021), we do this by establishing efficient presentation bounds for certain matrix groups over polynomial rings. As shown by Bafna, Minzer, and Vyas (STOC 2025), a consequence of our direct-product testing result is that the Kaufman--Oppenheim complexes can also be used to obtain PCPs with arbitrarily small constant soundness and quasilinear length. Thus the use of sophisticated number theory and algebraic group-theoretic tools in the construction of these PCPs can be avoided.

TR25-179 | Wide Replacement Products Meet Gray Codes: Toward Optimal Small-Bias Sets | Gil Cohen, Itay Cohen

from ECCC Papers

Optimal small-bias sets sit at the crossroads of coding theory and pseudorandomness. Reaching optimal parameters would, in particular, meet the long-standing goal of matching the Gilbert-Varshamov bound for binary codes in the high-distance regime. In a breakthrough, Ta-Shma (STOC 2017) constructed near-optimal small-bias sets via the Rozenman-Wigderson expander-walk framework, using the wide-replacement product to maintain $s$ "secure" registers and to route the walk through them. Within this framework, two barriers remain en route to optimal small-bias sets: (i) the cost of maintaining registers and (ii) limitations inherited from spectral-gap bounds for expanders. We overcome the first—arguably the more critical—barrier. Our key technical insight is that registers can be reused even after they are exposed. Using a Gray–code–style reuse schedule, we recycle the same $s$ registers exponentially many times in $s$, thereby reducing the register–maintenance cost exponentially. This yields the first improvement over Ta-Shma's construction in nearly a decade---quantitatively modest but an important first step toward a truly optimal construction. The remaining barrier is fairly standard in isolation; the challenge is to overcome it in concert with our register-reuse framework.
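
For context (the standard definition, assumed rather than taken from this abstract): a multiset $S \subseteq \{0,1\}^n$ is $\varepsilon$-biased if every nonempty parity is nearly unbiased over it, i.e., $|\mathbb{E}_{x \sim S} (-1)^{\sum_{i \in T} x_i}| \le \varepsilon$ for every nonempty $T \subseteq [n]$. Ta-Shma's construction achieves support size $O(n/\varepsilon^{2+o(1)})$, which is what ``near-optimal'' refers to above.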

Saturday, November 15

Linkage

from David Eppstein

Lattice path matroids (\(\mathbb{M}\)). An interesting generalization of the uniform matroids with which I was previously unfamiliar. They are defined by two monotone lattice paths in a rectangular grid, one below the other. The bases are lattice paths between the two defining paths, described by the set of positions at which they step upwards.
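
A brute-force version of that definition, to make it concrete (my own sketch; it assumes the usual convention that the two paths are words over {N, E} with the same endpoints, the upper path staying weakly above the lower one, and that a basis is the set of positions of a path's north steps):

    from itertools import combinations

    def lattice_path_matroid_bases(upper, lower):
        """Bases of the lattice path matroid bounded by two monotone lattice paths."""
        length, r = len(upper), upper.count("N")

        def heights(north_positions):
            # number of north steps among the first i steps, for i = 0..length
            out, h = [0], 0
            for i in range(1, length + 1):
                h += i in north_positions
                out.append(h)
            return out

        up = heights({i + 1 for i, s in enumerate(upper) if s == "N"})
        low = heights({i + 1 for i, s in enumerate(lower) if s == "N"})
        bases = []
        for pos in combinations(range(1, length + 1), r):     # candidate north-step positions
            mid = heights(set(pos))
            if all(low[i] <= mid[i] <= up[i] for i in range(length + 1)):
                bases.append(set(pos))                         # path stays between the two
        return bases

    # The uniform matroid U_{2,4}: every 2-subset of {1,2,3,4} is a basis.
    print(lattice_path_matroid_bases("NNEE", "EENN"))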

By David Eppstein

Assistant Professor at Bocconi University (apply by December 16, 2025)

from CCI: jobs

The Department of Computing Sciences at Bocconi University invites applications for a Tenure Track Assistant Professor position. The call is open to all subfields of CS, but theoretical computer science is one of the focus areas for the call. Refer to the link for more information. Website: jobmarket.unibocconi.eu/?id=870 Email: giulio.malavolta@unibocconi.it

The Department of Computing Sciences at Bocconi University invites applications for a Tenure Track Assistant Professor position.

The call is open to all subfields of CS, but theoretical computer science is one of the focus areas for the call. Refer to the link for more information.

Website: https://jobmarket.unibocconi.eu/?id=870
Email: giulio.malavolta@unibocconi.it

By shacharlovett

From Benign Simplex to Byzantine Simplex

from Decentralized Thoughts

In this post we present a Simplex protocol that solves single shot consensus and is resilient to $f < n/3$ Byzantine failures, under partial synchrony. In a previous post we showed Benign Simplex which is resilient to $f<n/2$ omission failures in the same setting. Moving from omission failures to Byzantine...

Friday, November 14

Postdoc in Computational Logic at Tampere University (apply by December 4, 2025)

from CCI: jobs

We invite applications for a postdoctoral position in computational logic at Tampere University, Finland. The role involves research on theoretical aspects of computational logic, with flexibility to align projects to the candidate’s interests. Applicants should have a background in mathematics, computer science, or a related field. Website: tuni.rekrytointi.com/paikat/?o=A_RJ&jgid=1&jid=2857 Email: antti.kuusisto@tuni.fi

We invite applications for a postdoctoral position in computational logic at Tampere University, Finland. The role involves research on theoretical aspects of computational logic, with flexibility to align projects to the candidate’s interests. Applicants should have a background in mathematics, computer science, or a related field.

Website: https://tuni.rekrytointi.com/paikat/?o=A_RJ&jgid=1&jid=2857
Email: antti.kuusisto@tuni.fi

By shacharlovett

TR25-178 | A Logspace Constructive Proof of L=SL | Valentine Kabanets, Sam Buss, Anant Dhayal, Antonina Kolokolova, Sasank Mouli

from ECCC Papers

We formalize the proof of Reingold's Theorem that SL=L (STOC'05) in the theory of bounded arithmetic VL, which corresponds to ``logspace reasoning''. As a consequence, we get that VL=VSL, where VSL is the theory of bounded arithmetic for ``symmetric-logspace reasoning''. This resolves in the affirmative an old open question from Kolokolova's PhD thesis (2005) (see also Cook and Nguyen (2010)). Our proof relies on the Rozenman-Vadhan alternative proof of Reingold's Theorem (RANDOM'05). To formalize this proof in VL, we need to avoid reasoning about eigenvalues and eigenvectors (common in both original proofs of SL=L). We achieve this by using some results from Buss-Kabanets-Kolokolova-Koucký (APAL, 2020) that allow VL to reason about graph expansion in combinatorial terms.

Sampling Permutations is Hard

from Emanuele Viola

Today I woke up a little earlier than usual, and now I know why: Yaroslav Alekseev, Mika Göös, Konstantin Myasnikov, Artur Riazanov, Dmitry Sokolov have just posted what looks like an amazing paper. They prove a sampling lower bound for permutations, something which had resisted many attacks. I suspect their result introduces new techniques in […]

Today I woke up a little earlier than usual, and now I know why: Yaroslav Alekseev, Mika Göös, Konstantin Myasnikov, Artur Riazanov, Dmitry Sokolov have just posted what looks like an amazing paper. They prove a sampling lower bound for permutations, something which had resisted many attacks. I suspect their result introduces new techniques in the area which will find more applications. In a nutshell, a uniform permutation locally looks like a uniform function. Because a uniform function is easy to sample, it is hard to prove a lower bound. We did have lower bounds for *non-adaptive* samplers of permutations, and also lower bounds for *adaptive* samplers of other functions (all this is discussed in their paper), but due to the proximity to a uniform function, the techniques broke down for adaptive samplers of permutations. My reading list keeps growing…

By Manu

TR25-177 | Sampling Permutations with Cell Probes is Hard | Artur Riazanov, Yaroslav Alekseev, Mika Göös, Konstantin Myasnikov, Dmitry Sokolov

from ECCC Papers

Suppose we are given an infinite sequence of input cells, each initialized with a uniform random symbol from $[n]$. How hard is it to output a sequence in $[n]^n$ that is close to a uniform random permutation? Viola (SICOMP 2020) conjectured that if each output cell is computed by making $d$ probes to input cells, then $d \ge \omega(1)$. Our main result shows that, in fact, $d \ge (\log n)^{\Omega(1)}$, which is tight up to the constant in the exponent. Our techniques also show that if the probes are nonadaptive, then $d\ge n^{\Omega(1)}$, which is an exponential improvement over the previous nonadaptive lower bound due to Yu and Zhan (ITCS 2024). Our results also imply lower bounds against succinct data structures for storing permutations.

Faculty at The University of Hong Kong (apply by December 31, 2025)

from CCI: jobs

Applications are invited for appointment as Tenure-track Professor/ Associate Professor/ Assistant Professor (several posts) at the School of Computing and Data Science, The University of Hong Kong. The positions are open to all research areas in Computer Science, Artificial Intelligence, Data Science or a related field. Website: jobs.hku.hk/en/job/533072/tenuretrack-professor-associate-professor-assistant-professor-several-posts-at-the-school-of-computing-and-data-science Email: zhiyi@cs.hku.hk

Applications are invited for appointment as Tenure-track Professor/ Associate Professor/ Assistant Professor (several posts) at the School of Computing and Data Science, The University of Hong Kong. The positions are open to all research areas in Computer Science, Artificial Intelligence, Data Science or a related field.

Website: https://jobs.hku.hk/en/job/533072/tenuretrack-professor-associate-professor-assistant-professor-several-posts-at-the-school-of-computing-and-data-science
Email: zhiyi@cs.hku.hk

By shacharlovett

TCS+ talk: Wednesday, November 19 — Haotian Jiang, U Chicago

from TCS+ Seminar Series

The next TCS+ talk will take place this coming Wednesday, November 19th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Haotian Jiang from U Chicago will speak about “Beck-Fiala and Komlós Bounds Beyond Banaszczyk” (abstract below). You can reserve a spot as an individual or a group to […]

The next TCS+ talk will take place this coming Wednesday, November 19th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Haotian Jiang from U Chicago will speak about “Beck-Fiala and Komlós Bounds Beyond Banaszczyk” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. Registration is not required to attend the interactive talk, and the link will be posted on the website the day prior to the talk; however, by registering in the form, you will receive a reminder, along with the link. (The recorded talk will also be posted on our website afterwards.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: The Beck-Fiala Conjecture asserts that any set system of n elements with degree k has combinatorial discrepancy O(\sqrt{k}). A substantial generalization is the Komlós Conjecture, which states that any m by n matrix with columns of unit Euclidean length has discrepancy O(1).

In this talk, we describe an \tilde{O}(log^{1/4} n) bound for the Komlós problem, improving upon the O(log^{1/2} n) bound due to Banaszczyk from 1998. We will also see how these ideas can be used to resolve the Beck-Fiala Conjecture for k \geq \log^2 n, and give a \tilde{O}(k^{1/2} + \log^{1/2} n) bound for smaller k, which improves upon Banaszczyk’s O(k^{1/2} \log^{1/2} n) bound. These results are based on a new technique of “Decoupling via Affine Spectral Independence” in designing rounding algorithms, which might also be useful in other contexts.

This talk is based on joint work with Nikhil Bansal (University of Michigan).
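For readers who want the objects above pinned down, here are the standard definitions (a reminder added here, not part of the talk announcement): the combinatorial discrepancy of a matrix $A \in \mathbb{R}^{m \times n}$ is $\mathrm{disc}(A) = \min_{x \in \{-1,+1\}^n} \|Ax\|_\infty$, and for a set system one takes $A$ to be its 0/1 incidence matrix. In the Beck-Fiala setting each element lies in at most $k$ sets, so each column of $A$ has at most $k$ ones and hence Euclidean length at most $\sqrt{k}$; rescaling the columns by $1/\sqrt{k}$ turns any Beck-Fiala instance into a Komlós instance, which is why an $O(1)$ bound for Komlós would imply the conjectured $O(\sqrt{k})$ Beck-Fiala bound.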

By plustcs

Quantum computing: too much to handle!

from Scott Aaronson

Tomorrow I’m headed to Berkeley for the Inkhaven blogging residency, whose participants need to write one blog post per day or get kicked out. I’ll be there to share my “wisdom” as a distinguished elder blogger (note that Shtetl-Optimized is now in its twentieth year). I’m acutely aware of the irony, that I myself can barely muster the willpower these days to put up a post every other week.

And it’s not as if nothing is happening in this blog’s traditional stomping-ground of quantum computing! In fact, the issue is just the opposite: way too much is happening for me to do it any sort of justice. Who do people think I am, Zvi Mowshowitz? The mere thought of being comprehensive, of responsibly staying on top of all the latest QC developments, makes me want to curl up in bed, and either scroll through political Substacks or take a nap.


But then, you know, eventually a post gets written. Let me give you some vignettes about what’s new in QC, any one of which could easily have been its own post if I were twenty years younger.

(1) Google announced verifiable quantum advantage based on Out-of-Time-Order-Correlators (OTOC)—this is actually from back in June, but it’s gotten more and more attention as Google has explained it more thoroughly. See especially this recent 2-page note by King, Kothari, et al., explaining Google’s experiment in theoretical computer science language. Basically, what they do is, starting from the all-|0⟩ state, to apply a random circuit C, then a single gate g, then C⁻¹, then another gate h, then C again, then g again, then C⁻¹, and then measure a qubit. If C is shallow, then the qubit is likely to still be |0⟩. If C is too deep, then the qubit is likely to be in the maximally mixed state, totally uncorrelated with its initial state—the gates g and h having caused a “butterfly effect” that completely ruined all the cancellation between C and C⁻¹. Google claims that, empirically, there’s an intermediate regime where the qubit is neither |0⟩ nor the maximally mixed state, but a third thing—and that this third thing seems hard to determine classically, using tensor network algorithms or anything else they’ve thrown at it, but it can of course be determined by running the quantum computer. Crucially, because we’re just trying to estimate a few parameters here, rather than sample from a probability distribution (as with previous quantum supremacy experiments), the output can be checked by comparing it against the output of a second quantum computer, even though the problem still isn’t in NP. Incidentally, if you’re wondering why they go back and forth between C and C⁻¹ multiple times rather than just once, it’s to be extra confident that there’s not a fast classical simulation. Of course there might turn out to be a fast classical simulation anyway, but if so, it will require a new idea: gauntlet thrown.
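To make the echo structure concrete, here is a minimal statevector sketch of the sequence described above (my own illustration, not Google's code or circuits): C is a brickwork circuit of Haar-random two-qubit gates, g is a bit-flip on a far-away "butterfly" qubit, and h is taken to be a Pauli-Z on the measured qubit; the specific placements and the choices of g and h are illustrative assumptions. For shallow C the measured qubit returns to |0⟩, while for deep C its probability of reading 0 drifts toward 1/2.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # "butterfly" gate g (illustrative choice)
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # middle gate h (illustrative choice)

def haar_2q(rng):
    """Haar-random 2-qubit unitary via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_1q(psi, u, i):
    """Apply a 1-qubit gate u to qubit i of statevector psi (shape (2,)*n)."""
    return np.moveaxis(np.tensordot(u, psi, axes=([1], [i])), 0, i)

def apply_2q(psi, u, i, j):
    """Apply a 4x4 gate u to qubits i < j."""
    out = np.tensordot(u.reshape(2, 2, 2, 2), psi, axes=([2, 3], [i, j]))
    return np.moveaxis(out, [0, 1], [i, j])

def brickwork(n, depth, rng):
    """Random brickwork circuit: list of (gate, qubit, qubit) triples."""
    return [(haar_2q(rng), a, a + 1)
            for layer in range(depth)
            for a in range(layer % 2, n - 1, 2)]

def run(psi, gates, inverse=False):
    """Apply C, or C^-1 (gates daggered and applied in reverse order)."""
    seq = reversed(gates) if inverse else gates
    for u, i, j in seq:
        psi = apply_2q(psi, u.conj().T if inverse else u, i, j)
    return psi

def otoc_echo(n=8, depth=2, seed=0):
    """Pr[measured qubit reads 0] after C g C^-1 h C g C^-1 |0...0>."""
    rng = np.random.default_rng(seed)
    gates = brickwork(n, depth, rng)
    psi = np.zeros((2,) * n, dtype=complex)
    psi[(0,) * n] = 1.0
    psi = run(psi, gates)                    # C
    psi = apply_1q(psi, X, n - 1)            # g on the "butterfly" qubit
    psi = run(psi, gates, inverse=True)      # C^-1
    psi = apply_1q(psi, Z, 0)                # h on the measured qubit
    psi = run(psi, gates)                    # C
    psi = apply_1q(psi, X, n - 1)            # g again
    psi = run(psi, gates, inverse=True)      # C^-1
    return float(np.sum(np.abs(np.take(psi, 0, axis=0)) ** 2))

if __name__ == "__main__":
    for d in (1, 2, 4, 8, 16):
        print(f"depth {d:2d}: Pr[qubit 0 = |0>] = {otoc_echo(depth=d):.3f}")
```

Running the sketch shows exactly the depth-driven transition the paragraph describes: probability 1 at shallow depth, near 1/2 once the butterfly's light cone covers the measured qubit.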

(2) Quantinuum, the trapped-ion QC startup in Colorado, announced its Helios processor. Quick summary of the specs: 98 qubits, all-to-all 2-qubit gates with 99.92% fidelity, the ability to choose which gates to apply “just in time” (rather than fixing the whole circuit in advance, as was needed with their previous API), and an “X”-shaped junction for routing qubits one way or the other (the sort of thing that a scalable trapped-ion quantum computer will need many of). This will enable, and is already enabling, more and better demonstrations of quantum advantage.

(3) Quantinuum and JP Morgan Chase announced the demonstration of a substantially improved version of my and Shih-Han Hung’s protocol for generating cryptographically certified random bits, using quantum supremacy experiments based on random circuit sampling. They did their demo on Quantinuum’s new Helios processor. Compared to the previous demonstration, the new innovation is to send the circuit to the quantum computer one layer at a time, rather than all at once (something that, again, Quantinuum’s new API allows). The idea is that a cheating server, who wanted to spoof the randomness deterministically, now has much less time: using the most competitive known methods (e.g., those based on tensor network contraction), it seems the cheater would need to swing into action only after learning the final layer of gates, so would now have mere milliseconds to spoof rather than seconds, making Internet latency the dominant source of spoofing time in practice. While a complexity-theoretic analysis of the new protocol (or, in general, of “layer-by-layer” quantum supremacy protocols like it) is still lacking, I like the idea a lot.
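Here is a toy schematic of the timing idea only (my own sketch, not the Quantinuum/JPMC implementation; the class names, layer encoding, and deadline value are all made up for illustration): the client streams the circuit one layer at a time and starts its clock only when the final layer is revealed, so a classical spoofer has just the remaining milliseconds to produce samples.

```python
import time

def stream_circuit(layers, server, deadline_ms=5.0):
    """Send layers one at a time; accept only answers returned within
    deadline_ms of the *final* layer being revealed (toy model)."""
    for layer in layers[:-1]:
        server.receive_layer(layer)       # server may precompute, but the
                                          # last layer is still unknown to it
    t_last = time.monotonic()
    server.receive_layer(layers[-1])      # the spoofing clock starts here
    samples = server.get_samples()
    elapsed_ms = (time.monotonic() - t_last) * 1000.0
    return samples, elapsed_ms, elapsed_ms <= deadline_ms

class HonestQuantumServer:
    """Stand-in for the quantum device: stores layers, answers instantly."""
    def __init__(self):
        self.layers = []
    def receive_layer(self, layer):
        self.layers.append(layer)
    def get_samples(self):
        return ["0101", "1100"]           # placeholder bitstrings

if __name__ == "__main__":
    toy_layers = [f"layer-{i}" for i in range(20)]
    out, ms, ok = stream_circuit(toy_layers, HonestQuantumServer())
    print(f"samples={out}  responded in {ms:.2f} ms  accepted={ok}")
```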

(4) The startup company BlueQubit announced a candidate demonstration of verifiable quantum supremacy via obfuscated peaked random circuits, again on a Quantinuum trapped-ion processor (though not Helios). In so doing, BlueQubit is following the program that Yuxuan Zhang and I laid out last year: namely, generate a quantum circuit C that hopefully looks random to any efficient classical algorithm, but that conceals a secret high-probability output string x, which pops out if you run C on a quantum computer on the all-0 initial state. To try to hide x, BlueQubit uses at least three different circuit obfuscation techniques, which already tells you that they can’t have complete confidence in any one of them (since if they did, why the other two?). Nevertheless, I’m satisfied that they tried hard to break their own obfuscation, and failed. Now it’s other people’s turn to try.

(5) Deshpande, Fefferman, et al. announced a different theoretical proposal for quantum advantage from peaked quantum circuits, based on error-correcting codes. This seems tempting to try to demonstrate along the way to quantum fault-tolerance.

(6) A big one: John Bostanci, Jonas Haferkamp, Chinmay Nirkhe, and Mark Zhandry announced a proof of a classical oracle separation between the complexity classes QMA and QCMA, something that they’ve been working on for well over a year. Their candidate problem is basically a QMA-ified version of my Forrelation, which Raz and Tal previously used to achieve an oracle separation between BQP and PH. I caution that their paper is 91 pages long and hasn’t yet been vetted by independent experts, and there have been serious failed attempts on this exact problem in the past. If this stands, however, it finally settles a problem that’s been open since 2002 (and which I’ve worked on at various points starting in 2002), and shows a strong sense in which quantum proofs are more powerful than classical proofs. Note that in 2006, Greg Kuperberg and I gave a quantum oracle separation between QMA and QCMA—introducing the concept of quantum oracles for the specific purpose of that result—and since then, there’s been progress on making the oracle steadily “more classical,” but the oracle was always still randomized or “in-place” or had restrictions on how it could be queried.

(7) Oxford Ionics (which is now owned by IonQ) announced a 2-qubit gate with 99.99% fidelity: a record, and significantly past the threshold for quantum fault-tolerance. However, as far as I know, it remains to demonstrate this sort of fidelity in a large programmable system with dozens of qubits and hundreds of gates.

(8) Semi-announcement: Quanta reports that “Physicists Take the Imaginary Numbers Out of Quantum Mechanics,” and this seems to have gone viral on my social media. The article misses the opportunity to explain that “taking the imaginary numbers out” is as trivial as choosing to call each complex amplitude “just an ordered pair of reals, obeying such-and-such rules, which happen to mimic the rules for complex numbers.” Thus, the only interesting question here is whether one can take imaginary numbers out of QM in various more-or-less “natural” ways: a technical debate that the recent papers are pushing forward. For what it’s worth, I don’t expect that anything coming out of this line of work will ever be “natural” enough for me to stop explaining QM in terms of complex numbers in my undergraduate class, for example.
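The "ordered pair of reals" point is easy to make concrete (a minimal sketch of my own, nothing more): represent each amplitude as a pair obeying the usual product rule, and the algebra of complex numbers reappears without the word "imaginary" ever being used.

```python
# An amplitude as an ordered pair of reals (a, b), mimicking a + bi.
def add(z, w):
    return (z[0] + w[0], z[1] + w[1])

def mul(z, w):
    # (a, b) * (c, d) = (ac - bd, ad + bc): exactly the complex product,
    # but nothing here is called "imaginary".
    return (z[0] * w[0] - z[1] * w[1], z[0] * w[1] + z[1] * w[0])

def norm_sq(z):
    # Born-rule probability weight of an amplitude
    return z[0] ** 2 + z[1] ** 2

i = (0.0, 1.0)
print(mul(i, i))   # (-1.0, 0.0): the pair playing the role of -1
```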

(9) The list of accepted talks for the annual QIP conference, to be held January 24-30 in Riga, Latvia, is now out. Lots of great stuff as always.

(10) There are probably other major recent developments in QC that I should’ve put into this post but forgot about. You can remind me about them in the comments.

(11) Indeed there are! I completely forgot that Phasecraft announced two simulations of fermionic systems that might achieve quantum advantage, one using Google’s Willow superconducting chip and the other using a Quantinuum device.


To summarize three takeaways:

  • Evidence continues to pile up that we are not living in the universe of Gil Kalai and the other quantum computing skeptics. Indeed, given the current staggering rate of hardware progress, I now think it’s a live possibility that we’ll have a fault-tolerant quantum computer running Shor’s algorithm before the next US presidential election. And I say that not only because of the possibility of the next US presidential election getting cancelled, or preempted by runaway superintelligence!
  • OK, but what will those quantum computers be useful for? Anyone who’s been reading this blog for the past 20 years, or any non-negligible fraction thereof, hopefully already has a calibrated sense of that, so I won’t belabor. But briefly: yes, our knowledge of useful quantum algorithms has slowly been expanding over the past thirty years. The central difficulty is that our knowledge of useful classical algorithms has also been expanding, and the only thing that matters is the differential between the two! I’d say that the two biggest known application areas for QC remain (a) quantum simulation and (b) the breaking of public-key cryptography, just as they were thirty years ago. In any case, none of the exciting developments that I’ve chosen to highlight in this post directly address the “what is it good for?” question, with the exception of the certified randomness thing.
  • In talks over the past three years, I’ve advocated “verifiable quantum supremacy on current hardware” as perhaps the central challenge right now for quantum computing theory. (As I love to point out, we do know how to achieve any two of (a) quantum supremacy that’s (b) verifiable and (c) runs on current hardware!) So I’m gratified that three of the recent developments that I chose to highlight, namely (1), (4), and (5), directly address this challenge. Of course, we’re not yet sure whether any of these three attempts will stand—that is, whether they’ll resist all attempts to simulate them classically. But the more serious shots on goal we have (and all three of these are quite serious), the better the chances that at least one will stand! So I’m glad that people are sticking their necks out, proposing these things, and honestly communicating what they know and don’t know about them: this is exactly what I’d hoped would happen. Of course, complexity-theoretic analysis of these proposals would also be great, perhaps from people with more youth and/or energy than me. Now it’s time for me to sleep.

By Scott

Low-soundness direct-product testers and PCPs from Kaufman--Oppenheim complexes

from arXiv: Computational Complexity

Authors: Ryan O'Donnell, Noah G. Singer

We study the Kaufman--Oppenheim coset complexes (STOC 2018, Eur. J. Comb. 2023), which have an elementary and strongly explicit description. Answering an open question of Kaufman, Oppenheim, and Weinberger (STOC 2025), we show that they support sparse direct-product testers in the low soundness regime. Our proof relies on the HDX characterization of agreement testing by Bafna--Minzer and Dikstein--Dinur (both STOC 2024), the recent result of Kaufman et al., and follows techniques from Bafna--Lifshitz--Minzer and Dikstein--Dinur--Lubotzky (both FOCS 2024). Ultimately, the task reduces to showing dimension-independent coboundary expansion of certain $2$-dimensional subcomplexes of the KO complex; following the ``Dehn method'' of Kaufman and Oppenheim (ICALP 2021), we do this by establishing efficient presentation bounds for certain matrix groups over polynomial rings. As shown by Bafna, Minzer, and Vyas (STOC 2025), a consequence of our direct-product testing result is that the Kaufman--Oppenheim complexes can also be used to obtain PCPs with arbitrarily small constant soundness and quasilinear length. Thus the use of sophisticated number theory and algebraic group-theoretic tools in the construction of these PCPs can be avoided.
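For orientation, the standard shape of the objects involved is as follows (my paraphrase of the usual definitions, not text from the paper): a direct-product encoding of $f : [n] \to \Sigma$ assigns to each $k$-set $S$ the restriction $F(S) = f|_S$, and a two-query agreement tester samples sets $S, T$ with a prescribed overlap (here, faces of the Kaufman--Oppenheim complex) and accepts iff $F(S)$ and $F(T)$ agree on $S \cap T$. The low-soundness regime requires that whenever a table $F$ passes with probability $\varepsilon$, there is a global $g$ with $F(S) \approx g|_S$ for at least a $\mathrm{poly}(\varepsilon)$ fraction of the sets $S$; "sparse" means the tester uses only a small (near-linear) family of sets rather than all of $\binom{[n]}{k}$.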

A measurement-driven quantum algorithm for SAT: Performance guarantees via spectral gaps and measurement parallelization

from arXiv: Computational Complexity

Authors: Franz J. Schreiber, Maximilian J. Kramer, Alexander Nietner, Jens Eisert

The Boolean satisfiability problem (SAT) is of central importance in both theory and practice. Yet, most provable guarantees for quantum algorithms rely exclusively on Grover-type methods that cap the possible advantage at only quadratic speed-ups, making the search for approaches that surpass this quadratic barrier a key challenge. In this light, this work presents a rigorous worst-case runtime analysis of a recently introduced measurement-driven quantum SAT solver. Importantly, this quantum algorithm does not exclusively rely on Grover-type methods and shows promising numerical performance. Our analysis establishes that the algorithm's runtime depends on an exponential trade-off between two key properties: the spectral gap of the associated Hamiltonian and the success probability of the driving measurements. We show that this trade-off can be systematically controlled by a tunable rotation angle. Beyond establishing a worst-case runtime expression, this work contributes significant algorithmic improvements. First, we develop a new readout routine that efficiently finds a solution even for instances with multiple satisfying assignments. Second, a measurement parallelization scheme, based on perfect hash families, is introduced. Third, we establish an amplitude-amplified version of the measurement-driven algorithm. Finally, we demonstrate the practical utility of our framework: By suitably scheduling the algorithm's parameters, we show that its runtime collapses from exponential to polynomial on a special class of SAT instances, consistent with their known classical tractability. A problem we leave open is to establish a non-trivial lower bound on the spectral gap as a function of the rotation angle. Resolving this directly translates into an improved worst-case runtime, potentially realizing a super-quadratic quantum advantage.
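The "perfect hash family" ingredient in the measurement-parallelization scheme is a purely combinatorial object, and a brute-force check of the definition fits in a few lines (my own illustration of the definition only; it says nothing about how the paper actually deploys such families):

```python
from itertools import combinations

def is_perfect_hash_family(funcs, n, k):
    """funcs: list of functions on range(n). The family is an (n, k)-perfect
    hash family if every k-subset of [n] is mapped injectively (i.e. gets k
    distinct values) by at least one function in the family."""
    for subset in combinations(range(n), k):
        if not any(len({h(x) for x in subset}) == k for h in funcs):
            return False
    return True

# Tiny example: n = 4, k = 2. Two functions into 2 colors suffice here,
# since each pair just needs *some* function that separates it.
h1 = lambda x: x % 2    # separates {0,1}, {0,3}, {1,2}, {2,3}
h2 = lambda x: x // 2   # separates {0,2}, {0,3}, {1,2}, {1,3}
print(is_perfect_hash_family([h1, h2], n=4, k=2))   # True
```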

Witness Set in Monotone Polygons: Exact and Approximate

from arXiv: Computational Geometry

Authors: Udvas Das, Binayak Dutta, Satyabrata Jana, Debabrata Pal, Sasanka Roy

Given a simple polygon $\mathscr{P}$, two points $x$ and $y$ within $\mathscr{P}$ are {\em visible} to each other if the line segment between $x$ and $y$ is contained in $\mathscr{P}$. The {\em visibility region} of a point $x$ includes all points in $\mathscr{P}$ that are visible from $x$. A point set $Q$ within a polygon $\mathscr{P}$ is said to be a \emph{witness set} for $\mathscr{P}$ if each point in $\mathscr{P}$ is visible from at most one point from $Q$. The problem of finding the largest size witness set in a given polygon was introduced by Amit et al. [Int. J. Comput. Geom. Appl. 2010]. Recently, Daescu et al. [Comput. Geom. 2019] gave a linear-time algorithm for this problem on monotone mountains. In this study, we contribute to this field by obtaining the largest witness set within both continuous and discrete models. In the {\sc Witness Set (WS)} problem, the input is a polygon $\mathscr{P}$, and the goal is to find a maximum-sized witness set in $\mathscr{P}$. In the {\sc Discrete Witness Set (DisWS)} problem, one is given a finite set of points $S$ alongside $\mathscr{P}$, and the task is to find a witness set $Q \subseteq S$ that maximizes $|Q|$. We investigate {\sc DisWS} in simple polygons, but consider {\sc WS} specifically for monotone polygons. Our main contributions are as follows: (1) a polynomial time algorithm for {\sc DisWS} for general polygons and (2) the discretization of the {\sc WS} problem for monotone polygons. Specifically, given a monotone polygon with $r$ reflex vertices and a positive integer $k$, we generate a point set $Q$ of size $r^{O(k)} \cdot n$ such that $Q$ contains a witness set of size $k$ (if one exists). This leads to an exact algorithm for the {\sc WS} problem in monotone polygons running in time $r^{O(k)} \cdot n^{O(1)}$. We also provide a PTAS for this problem with running time $r^{O(1/ε)} n^2$.

Is nasty noise actually harder than malicious noise?

from arXiv: Data Structures and Algorithms

Authors: Guy Blanc, Yizhi Huang, Tal Malkin, Rocco A. Servedio

We consider the relative abilities and limitations of computationally efficient algorithms for learning in the presence of noise, under two well-studied and challenging adversarial noise models for learning Boolean functions: malicious noise, in which an adversary can arbitrarily corrupt a random subset of examples given to the learner; and nasty noise, in which an adversary can arbitrarily corrupt an adversarially chosen subset of examples given to the learner. We consider both the distribution-independent and fixed-distribution settings. Our main results highlight a dramatic difference between these two settings: For distribution-independent learning, we prove a strong equivalence between the two noise models: If a class ${\cal C}$ of functions is efficiently learnable in the presence of $η$-rate malicious noise, then it is also efficiently learnable in the presence of $η$-rate nasty noise. In sharp contrast, for the fixed-distribution setting we show an arbitrarily large separation: Under a standard cryptographic assumption, for any arbitrarily large value $r$ there exists a concept class for which there is a ratio of $r$ between the rate $η_{malicious}$ of malicious noise that polynomial-time learning algorithms can tolerate, versus the rate $η_{nasty}$ of nasty noise that such learning algorithms can tolerate. To offset the negative result for the fixed-distribution setting, we define a broad and natural class of algorithms, namely those that ignore contradictory examples (ICE). We show that for these algorithms, malicious noise and nasty noise are equivalent up to a factor of two in the noise rate: Any efficient ICE learner that succeeds with $η$-rate malicious noise can be converted to an efficient learner that succeeds with $η/2$-rate nasty noise. We further show that the above factor of two is necessary, again under a standard cryptographic assumption.

A Quasi-Polynomial Time Algorithm for 3-Coloring Circle Graphs

from arXiv: Data Structures and Algorithms

Authors: Ajaykrishnan E S, Robert Ganian, Daniel Lokshtanov, Vaishali Surianarayanan

A graph $G$ is a circle graph if it is an intersection graph of chords of a unit circle. We give an algorithm that takes as input an $n$ vertex circle graph $G$, runs in time at most $n^{O(\log n)}$ and finds a proper $3$-coloring of $G$, if one exists. As a consequence we obtain an algorithm with the same running time to determine whether a given ordered graph $(G, \prec)$ has a $3$-page book embedding. This gives a partial resolution to the well known open problem of Dujmović and Wood [Discret. Math. Theor. Comput. Sci. 2004], Eppstein [2014], and Bachmann, Rutter and Stumpf [J. Graph Algorithms Appl. 2024] of whether 3-Coloring on circle graphs admits a polynomial time algorithm.
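As a concrete reminder of the objects involved (my own toy code, not the paper's algorithm, which runs in $n^{O(\log n)}$ time rather than by brute force): a circle graph can be specified by chords, i.e. pairs of distinct points on the circle, with two chords adjacent exactly when their endpoints interleave; the sketch below builds that intersection graph and checks 3-colorability exhaustively on small instances.

```python
from itertools import product

def crosses(c1, c2):
    """Chords are pairs of distinct integers labelling points read clockwise
    around the circle. Two chords cross iff exactly one endpoint of c2 lies
    strictly between the endpoints of c1."""
    a, b = sorted(c1)
    return sum(1 for x in c2 if a < x < b) == 1

def circle_graph(chords):
    """Intersection graph of the chords: vertex i is chords[i]."""
    n = len(chords)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if crosses(chords[i], chords[j])}

def is_3_colorable(n, edges):
    """Exhaustive 3^n search -- fine for toy instances only."""
    return any(all(col[i] != col[j] for i, j in edges)
               for col in product(range(3), repeat=n))

# Five pairwise-crossing chords on 10 circle points form a K5.
chords = [(0, 5), (1, 6), (2, 7), (3, 8), (4, 9)]
print(is_3_colorable(len(chords), circle_graph(chords)))   # False: K5 is not 3-colorable
```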
