Last Update

OPML feed of all feeds.

Subscribe to the Atom feed or the RSS feed to stay up to date.

Thank you to arXiv for use of its open access interoperability.

Note: the date of arXiv entries announced right after publication holidays might incorrectly show up as the date of the publication holiday itself. This is due to our ad hoc method of inferring announcement dates, which are not returned by the arXiv API.

Powered by Pluto.

Source on GitHub.

Maintained by Nima Anari, Arnab Bhattacharyya, Gautam Kamath.

Theory of Computing Report

Friday, January 16

Czech Summer School on Discrete Mathematics 2026

from CS Theory Events

July 6-10, 2026 Pilsen, Czech Republic https://www.mff.cuni.cz/en/iuuk/events/czech-summer-school-on-discrete-mathematics Registration deadline: March 1, 2026 The Czech (formerly Prague) Summer School on Discrete Mathematics has been held every two years since 2016. Currently, it is jointly organized by the Computer Science Institute of Charles University and the Department of Mathematics of the University of West Bohemia, and supported by the RSJ …

By shacharlovett


Assistant Professor, Tenure-Track at University of Delaware (apply by February 15, 2026)

from CCI: jobs

The Department of Computer & Information Sciences at the University of Delaware is hiring a tenure-track assistant professor in computer science theory. Theory is broadly construed to include the theory of computation, algorithms, logic, verification, formal methods, programming languages, or any area of computer science with a strong theoretical component. Position is open until filled.

Website: https://careers.udel.edu/en-us/job/502682
Email: siegel@udel.edu

By shacharlovett

EATCS Distinguished Dissertation Award 2025 -- Call for Nominations

from Luca Aceto

It's the time of the year for the standard call for nominations for EATCS Awards. Today, I am reposting the call for nominations for the EATCS Distinguished Dissertation Award 2025. See below and here for details. 

Do spread the news and encourage worthy candidates to submit their theses for this accolade!

-----------------------------

The EATCS establishes the Distinguished Dissertation Award to promote and
recognize outstanding dissertations in the field of Theoretical Computer
Science.

Any PhD dissertation in the field of Theoretical Computer Science that has been
successfully defended in 2025 is eligible.

Up to three dissertations will be selected by the committee for the year 2025. The
dissertations will be evaluated on the basis of originality and potential impact
on their respective fields and on Theoretical Computer Science.

Each of the selected dissertations will receive a prize of 1000 Euro. The
award-receiving dissertations will be published on the EATCS web site, where all the
EATCS Distinguished Dissertations will be collected.

The dissertation must be submitted by the author as an attachment to an email
message sent to the address dissertation-award@eatcs.org with subject EATCS
Distinguished Dissertation Award 2025 by February 28th, 2026 (CET). The deadline
is strict and late submissions will not be considered.

The body of the message must specify: Name and email address of the candidate;
Title of the dissertation; Department that has awarded the PhD and denomination
of the PhD program; Name and email address of the thesis supervisor; Date of the
successful defence of the thesis.

A five-page abstract of the dissertation and a letter by the thesis supervisor
certifying that the thesis has been successfully defended must also be included.
In addition, an endorsement letter from the thesis supervisor, and possibly one
more endorsement letter, must be sent by the endorsers as attachments to an
email message sent to the address dissertation-award@eatcs.org with subject
EATCS DDA 2025 endorsement. The name of the candidate should be clearly
specified in the message.

The dissertations will be selected by the following committee:

Standa Zivny (chair)
Petra Berenbrink
Loukas Georgiadis
Kasper Green Larsen
Emanuela Merelli

The award committee will solicit the opinion of members of the research
community as appropriate. Dissertations supervised by members of the selection
committee are not eligible. The EATCS is committed to equal opportunities, and
welcomes submissions of outstanding theses from all authors.

By Luca Aceto


Optimal Proximity Gap for Folded Reed--Solomon Codes via Subspace Designs

from arXiv: Computational Complexity

Authors: Fernando Granha Jeronimo, Lenny Liu, Pranav Rajpal

A collection of sets satisfies a $(\delta,\varepsilon)$-proximity gap with respect to some property if for every set in the collection, either (i) all members of the set are $\delta$-close to the property in (relative) Hamming distance, or (ii) only a small $\varepsilon$-fraction of members are $\delta$-close to the property. In a seminal work, Ben-Sasson et al. showed that the collection of affine subspaces exhibits a $(\delta,\varepsilon)$-proximity gap with respect to the property of being Reed--Solomon (RS) codewords with $\delta$ up to the so-called Johnson bound for list decoding. Their technique relies on the Guruswami--Sudan list decoding algorithm for RS codes, which is guaranteed to work in the Johnson bound regime. Folded Reed--Solomon (FRS) codes are known to achieve the optimal list decoding radius $\delta$, a regime known as capacity. Moreover, a rich line of list decoding algorithms was developed for FRS codes. It is then natural to ask if FRS codes can be shown to exhibit an analogous $(\delta,\varepsilon)$-proximity gap, but up to the so-called optimal capacity regime. We answer this question in the affirmative (and the framework naturally applies more generally to suitable subspace-design codes). An additional motivation to understand proximity gaps for FRS codes is the recent results [BCDZ'25] showing that they exhibit properties similar to random linear codes, which were previously shown to be related to properties of RS codes with random evaluation points in [LMS'25], as well as codes over constant-size alphabet based on AEL [JS'25].
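For reference, the gap condition in the first sentence is a dichotomy that can be written compactly: for every set $S$ in the collection,

$$\Pr_{s \in S}\bigl[\Delta(s, C) \le \delta\bigr] \in [0, \varepsilon] \cup \{1\},$$

where $C$ is the property and $\Delta$ denotes relative Hamming distance; either at most an $\varepsilon$-fraction of the members of $S$ are $\delta$-close to $C$, or all of them are.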


Correspondences in computational and dynamical complexity II: forcing complex reductions

from arXiv: Computational Complexity

Authors: Samuel Everett

An algebraic telic problem is a decision problem in $\textsf{NP}_\mathbb{R}$ formalizing finite-time reachability questions for one-dimensional dynamical systems. We prove that the existence of "natural" mapping reductions between algebraic telic problems coming from distinct dynamical systems implies the two dynamical systems exhibit similar behavior (in a precise sense). As a consequence, we obtain explicit barriers for algorithms solving algebraic telic problems coming from complex dynamical systems, such as those with positive topological entropy. For example, some telic problems cannot be decided by uniform arithmetic circuit families with only $+$ and $\times$ gates.


Constant-Depth Unitary Preparation of Dicke States

from arXiv: Computational Complexity

Authors: Francisca Vasconcelos, Malvika Raj Joshi

Dicke states serve as a critical resource in quantum metrology, communication, and computation. However, unitary preparation of Dicke states is limited to logarithmic depth in standard circuit models and existing constant-depth protocols require measurement and feed-forward. In this work, we present the first unitary, constant-depth protocols for exact Dicke state preparation. We overcome the logarithmic-depth barrier by moving beyond the standard circuit model and leveraging global interactions (native to architectures such as neutral atoms and trapped ions). Specifically, utilizing unbounded CZ gates (i.e. within the QAC$^0$ circuit class), we offer circuits for exact computation of constant-weight Dicke states, using polynomial ancillae, and approximation of weight-1 Dicke states (i.e. $W$ states), using only constant ancillae. Granted additional access to the quantum FAN-OUT operation (i.e. upgrading to the QAC$_f^0$ circuit class), we also achieve exact preparation of arbitrary-weight Dicke states, with polynomial ancillae. These protocols distinguish the constant-depth capabilities of quantum architectures based on connectivity and offer a novel path toward resolving a long-standing quantum complexity conjecture.
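For context, the weight-$k$ Dicke state on $n$ qubits discussed above is the uniform superposition over all $n$-bit strings of Hamming weight $k$, with the $W$ state being the weight-$1$ case:

$$|D^n_k\rangle = \binom{n}{k}^{-1/2} \sum_{x \in \{0,1\}^n : |x| = k} |x\rangle, \qquad |W_n\rangle = |D^n_1\rangle.$$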


Scalable Algorithms for Approximate DNF Model Counting

from arXiv: Data Structures and Algorithms

Authors: Paul Burkhardt, David G. Harris, Kevin T Schmitt

Model counting of Disjunctive Normal Form (DNF) formulas is a critical problem in applications such as probabilistic inference and network reliability. For example, it is often used for query evaluation in probabilistic databases. Due to the computational intractability of exact DNF counting, there has been a line of research into a variety of approximation algorithms. These include Monte Carlo approaches such as the classical algorithms of Karp, Luby, and Madras (1989), as well as methods based on hashing (Soos et al. 2023), and heuristic approximations based on Neural Nets (Abboud, Ceylan, and Lukasiewicz 2020). We develop a new Monte Carlo approach with an adaptive stopping rule and short-circuit formula evaluation. We prove it achieves Probably Approximately Correct (PAC) learning bounds and is asymptotically more efficient than the previous methods. We also show experimentally that it outperforms prior algorithms by orders of magnitude, and can scale to much larger problems with millions of variables.
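The classical Karp-Luby-Madras estimator mentioned above is easy to state; here is a minimal Python sketch of that baseline (not the paper's new adaptive algorithm). A clause is represented as a dict fixing some variables, and `n` is the number of variables.

```python
import random
from fractions import Fraction

def karp_luby_dnf_count(clauses, n, samples=100_000, rng=random):
    """Estimate the number of satisfying assignments of a DNF formula.

    `clauses` is a list of clauses; each clause is a dict {var: bool}
    mapping a variable index in range(n) to its required truth value.
    Classical Karp-Luby-Madras coverage estimator: sample a pair
    (clause i, assignment x satisfying clause i) with the right weights,
    and count the sample iff i is the *first* clause that x satisfies.
    """
    # Each clause C fixes |C| variables, so it has 2^(n - |C|) satisfying
    # assignments; U is the total weight of the sample space of pairs.
    weights = [2 ** (n - len(c)) for c in clauses]
    U = sum(weights)

    hits = 0
    for _ in range(samples):
        # Pick a clause with probability proportional to its weight.
        i = rng.choices(range(len(clauses)), weights=weights)[0]
        # Pick a uniformly random assignment satisfying clause i.
        x = [rng.random() < 0.5 for _ in range(n)]
        for var, val in clauses[i].items():
            x[var] = val
        # Coverage check: is i the first clause that x satisfies?
        first = next(j for j, c in enumerate(clauses)
                     if all(x[v] == b for v, b in c.items()))
        hits += (first == i)

    return Fraction(hits, samples) * U

# Example: (x0 AND x1) OR (NOT x2) over 3 variables -> 5 satisfying assignments.
clauses = [{0: True, 1: True}, {2: False}]
print(float(karp_luby_dnf_count(clauses, n=3)))  # ~ 5.0
```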


Improved Algorithms for Fair Matroid Submodular Maximization

from arXiv: Data Structures and Algorithms

Authors: Sepideh Mahabadi, Sherry Sarkar, Jakub Tarnawski

Submodular maximization subject to matroid constraints is a central problem with many applications in machine learning. As algorithms are increasingly used in decision-making over datapoints with sensitive attributes such as gender or race, it is becoming crucial to enforce fairness to avoid bias and discrimination. Recent work has addressed the challenge of developing efficient approximation algorithms for fair matroid submodular maximization. However, the best algorithms known so far are only guaranteed to satisfy a relaxed version of the fairness constraints that loses a factor 2, i.e., the problem may ask for $\ell$ elements with a given attribute, but the algorithm is only guaranteed to find $\lfloor \ell/2 \rfloor$. In particular, there is no provable guarantee when $\ell=1$, which corresponds to a key special case of perfect matching constraints. In this work, we achieve a new trade-off via an algorithm that gets arbitrarily close to full fairness. Namely, for any constant $\varepsilon>0$, we give a constant-factor approximation to fair monotone matroid submodular maximization that in expectation loses only a factor $(1-\varepsilon)$ in the lower-bound fairness constraint. Our empirical evaluation on a standard suite of real-world datasets -- including clustering, recommendation, and coverage tasks -- demonstrates the practical effectiveness of our methods.


UFO Trees: Practical and Provably-Efficient Parallel Batch-Dynamic Trees

from arXiv: Data Structures and Algorithms

Authors: Quinten De Man, Atharva Sharma, Kishen N Gowda, Laxman Dhulipala

The dynamic trees problem is to maintain a tree under edge updates while supporting queries like connectivity queries or path queries. Despite the first data structure for this fundamental problem -- the link-cut tree -- being invented 40 years ago, our experiments reveal that it is still the fastest sequential data structure for the problem. However, link-cut trees cannot support parallel batch-dynamic updates and have limitations on the kinds of queries they support. In this paper, we design a new parallel batch-dynamic trees data structure called UFO trees that simultaneously supports a wide range of query functionality, supports work-efficient parallel batch-dynamic updates, and is competitive with link-cut trees when run sequentially. We prove that a key reason for the strong practical performance of both link-cut trees and UFO trees is that they can perform updates and queries in sub-logarithmic time for low-diameter trees. We perform an experimental study of our optimized C++ implementations of UFO trees with ten other dynamic tree implementations, several of which are new, in a broad benchmark of both synthetic and real-world trees of varying diameter and size. Our results show that, in both sequential and parallel settings, UFO trees are the fastest dynamic tree data structure that supports a wide range of queries. Our new implementation of UFO trees has low space usage and easily scales to billion-size inputs, making it a promising building block for implementing more complex dynamic graph algorithms in practice.


Thursday, January 15

TR26-004 | Yet Another Proof that $BPP \subseteq PH$ | Ilya Volkovich

from ECCC Papers

We present a new, simplified proof that the complexity class BPP is contained in the Polynomial Hierarchy (PH), using $k$-wise independent hashing as the main tool. We further extend this approach to recover several other previously known inclusions between complexity classes. Our techniques are inspired by the work of Bellare, Goldreich, and Petrank (Information and Computation, 2000).
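For orientation, the textbook route to this inclusion is the Sipser-Gács-Lautemann argument showing $BPP \subseteq \Sigma_2^p \subseteq PH$; the paper's point is to derive such inclusions via $k$-wise independent hashing instead. In the classical version, after amplifying the error of the BPP machine $M$ (using $m$ random bits) below $2^{-n}$, one has

$$x \in L \iff \exists\, z_1,\dots,z_k \in \{0,1\}^m\ \ \forall\, r \in \{0,1\}^m\ \ \bigvee_{i=1}^{k} M(x,\, r \oplus z_i) = 1,$$

with $k = \lceil m/n \rceil + 1$, which is a $\Sigma_2$ predicate.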


STOC 2026 Call for Workshops

from Theory Dish: Stanford Blog

TheoryFest 2026 will hold workshops during the STOC 2026 conference week, June 22–26, 2026, in Salt Lake City, Utah, USA. We invite groups of interested researchers to submit workshop proposals! See here for more details: https://acm-stoc.org/stoc2026/callforworkshops.html

Submission deadline: March 6, 2026.

By mkwoot

STOC 2026 Workshops

from CS Theory Events

June 22-26, 2026 Salt Lake City, Utah, USA https://acm-stoc.org/stoc2026/callforworkshops.html Submission deadline: March 6, 2026 TheoryFest 2026 will hold workshops during the STOC 2026 conference week, June 22–26, 2026. We invite groups of interested researchers to submit workshop proposals.

By shacharlovett

Linkage

from David Eppstein

Bridges between descriptive set theory and the theory of distributed computing through coloring geometric graphs (𝕄), Quanta, based on Anton Bernshteyn’s “Distributed algorithms, the Lovász Local Lemma, and descriptive combinatorics”, Inventiones Math. 2023.

By David Eppstein

1000 Hurts

from Ben Recht

Psychophysics is a human-facing science with interventions arguably more robust than medicine.

Anyone who’s read enough argmin dot net knows that I often write posts trolling for information. Monday’s post falls into that category, because I am legitimately curious about which human-facing sciences routinely find robust effects. And I got a legitimate, definitive answer from many passionate readers and friends: psychophysics. This is a compelling “exception to prove the rule” in human-facing science.

Psychophysics is the study of how humans perceive and process physical stimuli like light and sound. It is the science that maps out the limits of what we can perceive. Remarkably it has cataloged a long list of universal, robust principles of perception.

I feel dumb for not making this connection before because I actually know a lot of psychophysics. With the naive hope of making a career in music, I spent much of my youth studying the psychophysics of auditory perception. This subfield is so vital that it gets its own name: psychoacoustics. We can’t hear frequencies above 22,000 Hz, which is why we can’t hear dog whistles. We perceive loudness logarithmically, which is why we invented the decibel scale. There are more subtle effects, such as masking, in which a perceptible sound vanishes when played simultaneously with another sound of suitably higher volume. Even these curious, subtle effects are robustly replicable.
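(As a concrete instance of that logarithmic law: sound pressure level in decibels is defined as

$$L_p = 20\,\log_{10}(p/p_0)\ \text{dB}, \qquad p_0 = 20\ \mu\text{Pa},$$

with $p_0$ the standard reference pressure near the threshold of hearing, so each tenfold increase in sound pressure adds 20 dB.)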

The laws of psychoacoustics are so robust and stable that we can build an entire engineering discipline on top of them. We remove the humans from the story and express the principles as mathematical equations and numerical tables. Psychology becomes signal processing. We can then translate these formulae into circuit diagrams and digital code. This code tells us how audio can be captured, compressed, decoded, and amplified without perceptual loss. The same principles enable technologies from telephones to concert loudspeakers. We now take for granted the ability to access high-quality audio from an infinitude of artists on our smartphones. Transmission of sound was somewhat more shocking one hundred years ago.

Visual psychophysics is similarly well established, and this field forms the backbone of our capture, compression, and reproduction of imagery. I mentioned a paper on Monday about memory and word confusion, and this had a similar psychophysical feel. We shouldn’t be surprised that they reported a highly significant effect in their replication study. Recent investigations into the psychophysics of sensorimotor control have also found robust principles that may lead to innovations in prosthetics and other human-machine interfaces. Konrad Kording pointed me to the contemporary sensorimotor psychophysics literature and noted that the results are so undeniable that the community rarely reports p-values. He’s right. The error bars are tiny. Because of its maturity and relatively small size, the field of psychophysics perhaps doesn’t find as many sigma interventions as medicine. But it’s still a place where consistent, robust interventions are discovered.

Why psychophysics is so much more robust than “soft” psychology is a question that psychologists have been torturing themselves about for a century. In one of the science reform community’s favorite early papers on the file drawer/publication bias problem, Anthony Greenwald explicitly calls out psychophysics as likely exempt from his considerations in “Consequences of prejudice against the null hypothesis.” Friend-of-the-blog Paul Meehl lists fifteen reasons why soft psychology fails to make progress in his 1978 magnum opus “Theoretical Risks and Tabular Asterisks: Sir Karl, Sir Ronald, and the Slow Progress of Soft Psychology.”

Go through Meehl’s list of 15 issues in soft psychology, and you’ll see none apply to psychophysics. You have very nicely defined quantitative outcomes: the number of objects, the location of a stimulus, the ordering of a list. You have precisely controlled interventions: you can finely vary the presentation stimuli, changing the color, amplitude, relative magnitude of background noise, and so on. Psychophysics often admits strongly predictive mathematical laws that robustly forecast responses of a diverse set of individuals. The quantifications have stable meanings across contexts, and we can transport them into engineering rules.

If anything, psychophysics is far better off scientifically than medicine. Disease pathology is often far more difficult to define in rigid terms. Few pharmaceutical interventions have clean mathematical models of action beyond the simple pharmacodynamics of absorption. Most chemicals are poison. Some chemicals are miracle cures. It’s astounding that we find anything that works at all.

And yet we do.

By Ben Recht

Diagonalization Without Relativization: A Closer Look at the Baker-Gill-Solovay Theorem

from arXiv: Computational Complexity

Authors: Baruch Garcia

We already know that several problems like the inequivalence of P and EXP as well as the undecidability of the acceptance problem and halting problem relativize. However, relativization is a limited tool which cannot separate other complexity classes. What has not been proven explicitly is whether the Turing-recognizability of the acceptance problem relativizes. We will consider an oracle for which R and RE are equivalent: $R^A = RE^A$, where $A$ is an oracle for the equivalence problem in the class ALL, but not in RE nor co-RE. We will then differentiate between relativization and what we will call "semi-relativization", i.e., separating classes using only the acceptance problem oracle. We argue the separation of R and RE is a fact that only "semi-relativization" proves. We will then "scale down" to the polynomial analog of R and RE, to evade the Baker-Gill-Solovay barrier using "semi-relativized" diagonalization, noting this subtle distinction between diagonalization and relativization. This "polynomial acceptance problem" is then reducible to CIRCUIT-SAT and 3-CNF-SAT, proving that these problems are undecidable in polynomial time yet verifiable in polynomial time. "Semi-relativization" does not employ arithmetization to evade the relativization barrier, and so itself evades the algebrization barrier of Aaronson and Wigderson. Finally, since semi-relativization is a non-constructive technique, the natural proofs barrier of Razborov and Rudich is evaded. Thus the separation of R and RE as well as P and NP both do not relativize but do "semi-relativize", evading all three barriers.


Complexity Thresholds for the Constrained Colored Token Swapping Problem

from arXiv: Computational Complexity

Authors: Davide Bilò, Stefano Leucci, Andrea Martinelli

Consider the following puzzle: a farmland consists of several fields, each occupied by either a farmer, a fox, a chicken, or a caterpillar. Creatures in neighboring fields can swap positions as long as the fox avoids the farmer, the chicken avoids the fox, and the caterpillar avoids the chicken. The objective is to decide whether there exists a sequence of swaps that rearranges the creatures into a desired final configuration, while avoiding any unwanted encounters. The above puzzle can be cast as an instance of the \emph{colored token swapping} problem with $k = 4$ colors (i.e., creature types), in which only certain pairs of colors can be swapped. We prove that this problem is $\mathsf{PSPACE}$-hard even when the graph representing the farmland is planar and cubic. We also show that the problem is polynomial-time solvable when at most three creature types are involved. We do so by providing a more general algorithm deciding instances with arbitrary values of $k$, as long as the set of all admissible swaps between creature types induces a \emph{spanning star}. Our results settle a problem explicitly left open in [Yang and Zhang, IPL 2025], which established $\mathsf{PSPACE}$-completeness for eight creature types and left the complexity status unresolved when the number of creature types is between three and seven.
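To make the model concrete, here is a small brute-force reachability check in Python (a toy sketch of the puzzle as described in the abstract, not the authors' construction; it is exponential in general, consistent with the hardness result):

```python
from collections import deque

def swap_reachable(edges, start, target, allowed):
    """BFS over configurations of the constrained colored token swapping puzzle.

    `edges`: list of (u, v) pairs of neighboring fields.
    `start`, `target`: tuples of color labels, indexed by field.
    `allowed`: set of frozensets {a, b} of color pairs permitted to swap.
    Returns True iff `target` is reachable from `start`. Exponential-time
    brute force; fine for toy instances only.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        cfg = queue.popleft()
        if cfg == target:
            return True
        for u, v in edges:
            if frozenset((cfg[u], cfg[v])) in allowed:
                nxt = list(cfg)
                nxt[u], nxt[v] = nxt[v], nxt[u]
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

# Toy farmland: a path of 3 fields; the chicken avoids the fox,
# so only farmer-fox and farmer-chicken swaps are allowed.
edges = [(0, 1), (1, 2)]
allowed = {frozenset(p) for p in [("farmer", "fox"), ("farmer", "chicken")]}
start = ("fox", "farmer", "chicken")
print(swap_reachable(edges, start, ("farmer", "fox", "chicken"), allowed))
# True: the farmer steps left.
print(swap_reachable(edges, start, ("chicken", "farmer", "fox"), allowed))
# False: the fox and the chicken can never pass each other.
```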


Wataridori is NP-Complete

from arXiv: Computational Complexity

Authors: Suthee Ruangwises

Wataridori is a pencil puzzle involving drawing paths to connect all circles in a rectangular grid into pairs, in order to satisfy several constraints. In this paper, we prove that deciding solvability of a given Wataridori puzzle is NP-complete via reduction from Numberlink, another pencil puzzle that has already been proved to be NP-complete.


Lower Bounds in Algebraic Complexity via Symmetry and Homomorphism Polynomials

from arXiv: Computational Complexity

Authors: Prateek Dwivedi, Benedikt Pago, Tim Seppelt

Valiant's conjecture asserts that the circuit complexity classes VP and VNP are distinct, meaning that the permanent does not admit polynomial-size algebraic circuits. As is the case in many branches of complexity theory, the unconditional separation of these complexity classes seems elusive. In stark contrast, the symmetric analogue of Valiant's conjecture has been proven by Dawar and Wilsenach (2020): the permanent does not admit symmetric algebraic circuits of polynomial size, while the determinant does. Symmetric algebraic circuits are both a powerful computational model and amenable to proving unconditional lower bounds. In this paper, we develop a symmetric algebraic complexity theory by introducing symmetric analogues of the complexity classes VP, VBP, and VF called symVP, symVBP, and symVF. They comprise polynomials that admit symmetric algebraic circuits, skew circuits, and formulas, respectively, of polynomial orbit size. Having defined these classes, we show unconditionally that $\mathsf{symVF} \subsetneq \mathsf{symVBP} \subsetneq \mathsf{symVP}$. To that end, we characterise the polynomials in symVF and symVBP as those that can be written as linear combinations of homomorphism polynomials for patterns of bounded treedepth and pathwidth, respectively. This extends a previous characterisation by Dawar, Pago, and Seppelt (2026) of symVP. Finally, we show that symVBP and symVP contain homomorphism polynomials which are VBP- and VP-complete, respectively. We give general graph-theoretic criteria for homomorphism polynomials and their linear combinations to be VBP-, VP-, or VNP-complete. These conditional lower bounds drastically enlarge the realm of natural polynomials known to be complete for VNP, VP, or VBP. Under the assumption VFPT $\neq$ VW[1], we precisely identify the homomorphism polynomials that lie in VP as those whose patterns have bounded treewidth.
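For readers meeting homomorphism polynomials for the first time, one standard way to define them (our notation; the paper's conventions may differ in details) is, for a pattern graph $H$ and $n$-vertex hosts,

$$\mathrm{Hom}_H = \sum_{\varphi\,:\,V(H)\to[n]}\ \prod_{\{u,v\}\in E(H)} x_{\varphi(u)\varphi(v)},$$

so that substituting the adjacency matrix of a host graph $G$ for the variables $x_{ij}$ evaluates to the number of homomorphisms from $H$ to $G$.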


A $4/3$ ratio approximation algorithm for the Tree Augmentation Problem by deferred local-ratio and climbing

from arXiv: Computational Complexity

Authors: Guy Kortsarz

The \emph{Tree Augmentation Problem (TAP)} is: given a tree $T=(V,E_T)$ and an additional set of {\em links} $E$ on $V\times V$, find $F \subseteq E$ such that $T \cup F$ is $2$-edge-connected and $|F|$ is minimum. The problem is APX-hard even if links are only between leaves \cite{r}. The best known approximation ratio for TAP is $1.393$, due to Traub and Zenklusen~\cite{tr1} (J.~ACM,~2025), using the {\em relative greedy} technique \cite{zel}. \noindent We introduce a new technique called the {\em deferred local-ratio technique}, in which the disjointness of the local-ratio primal-dual type does not hold. The technique applies Set Cover under certain conditions (see Section \ref{lr}). We use it to provide a $4/3$ approximation algorithm for TAP. It is possible this technique will find future applications. The running time is $O(m\cdot\sqrt{n})$ \cite{vaz}, \cite{vaz1}, faster than \cite{tr1}, \cite{LS}, and LP-based algorithms, as we do not enumerate structures of size $\exp(\Theta(f(1/\varepsilon)\cdot \log n))$, nor do we scale and round. \noindent \cite{ed} has an implementation \cite{kol} that is extensively used in industry.


Correspondences in computational and dynamical complexity I

from arXiv: Computational Complexity

Authors: Samuel Everett

We begin development of a method for studying dynamical systems using concepts from computational complexity theory. We associate families of decision problems, called telic problems, to dynamical systems of a certain class. These decision problems formalize finite-time reachability questions for the dynamics with respect to natural coarse-grainings of state space. Our main result shows that complexity-theoretic lower bounds have dynamical consequences: if a system admits a telic problem for which every decider runs in time $2^{\Omega(n)}$, then it must have positive topological entropy. This result and others lead to methods for classifying dynamical systems through proving bounds on the runtime of algorithms solving their associated telic problems, or by constructing polynomial-time reductions between telic problems coming from distinct dynamical systems.


Bounding the interleaving distance on concrete categories using a loss function

from arXiv: Computational Geometry

Authors: Astrid A. Olave, Elizabeth Munch

The interleaving distance is arguably the most widely used metric in topological data analysis (TDA) due to its applicability to a wide array of inputs of interest, such as (multiparameter) persistence modules, Reeb graphs, merge trees, and zigzag modules. However, computation of the interleaving distance in the vast majority of these settings is known to be NP-hard, limiting its use in practical settings. Inspired by the work of Chambers et al. on the interleaving distance for mapper graphs, we solve a more general problem bounding the interleaving distance between generalized persistence modules on concrete categories via a loss function. This loss function measures how far an assignment, which can be thought of as an interleaving that might not commute, is from defining a true interleaving. We give settings for which the loss can be computed in polynomial time, including for certain assumptions on $k$-parameter persistence modules.


On Numbers of Simplicial Walks and Equivalent Canonizations for Graph Recognition

from arXiv: Data Structures and Algorithms

Authors: Marek Černý

Two graphs are isomorphic exactly when they admit the same number of homomorphisms from every graph. Hence, a graph is recognized up to isomorphism by homomorphism counts over the class of all graphs. Restricting to a specific graph class yields some natural isomorphism relaxations and modulates recognition to particular graph properties. A notable restriction is to the classes of bounded treewidth, yielding the isomorphism relaxation of Weisfeiler--Leman refinement (WL), as shown by Dvořák [JGT 2010]. The properties recognized by WL are exactly those definable in fragments of first-order logic with counting quantifiers, as shown by Cai, Fürer, and Immerman [Comb. 1992]. We characterize the restriction to the classes of bounded pathwidth by numbers of simplicial walks, and formalize it into a refinement procedure (SW). The properties recognized by SW are exactly those definable in fragments of restricted-conjunction first-order logic with counting quantifiers, introduced by Montacute and Shah [LMCS 2024]. Unlike WL, computing SW directly is not polynomial-time in general. We address this by representing SW in terms of multiplicity automata. We equip these automata with an involution, simplifying the canonization to standard forward reduction and omitting the backward one. The resulting canonical form is computable in time $O(kn^{3k})$ for any graph on $n$ vertices and the restriction to pathwidth at most $k$.


Computational Complexity of Swish

from arXiv: Data Structures and Algorithms

Authors: Takashi Horiyama, Takehiro Ito, Jun Kawahara, Shin-ichi Minato, Akira Suzuki, Ryuhei Uehara, Yutaro Yamaguchi

Swish is a card game in which players are given cards having symbols (hoops and balls), and find a valid superposition of cards, called a "swish." Dailly, Lafourcade, and Marcadet (FUN 2024) studied a generalized version of Swish and showed that the problem is solvable in polynomial time with one symbol per card, while it is NP-complete with three or more symbols per card. In this paper, we resolve the previously open case of two symbols per card, which corresponds to the original game. We show that Swish is NP-complete for this case. Specifically, we prove the NP-hardness when the allowed transformations of cards are restricted to a single (horizontal or vertical) flip or 180-degree rotation, and extend the results to the original setting allowing all three transformations. In contrast, when neither transformation is allowed, we present a polynomial-time algorithm. Combining known and our results, we establish a complete characterization of the computational complexity of Swish with respect to both the number of symbols per card and the allowed transformations.


On the complexity of global Roman domination problem in graphs

from arXiv: Data Structures and Algorithms

Authors: Sangam Balchandar Reddy, Arun Kumar Das, Anjeneya Swami Kare, I. Vinod Reddy

A Roman dominating function of a graph $G=(V,E)$ is a labeling $f: V \rightarrow \{0, 1, 2\}$ such that for each vertex $u \in V$ with $f(u) = 0$, there exists a vertex $v \in N(u)$ with $f(v) = 2$. A Roman dominating function $f$ is a global Roman dominating function if it is a Roman dominating function for both $G$ and its complement $\overline{G}$. The weight of $f$ is the sum of $f(u)$ over all vertices $u \in V$. The objective of the Global Roman Domination problem is to compute a global Roman dominating function of minimum weight. In this paper, we study the algorithmic aspects of the Global Roman Domination problem on various graph classes and obtain the following results. 1. We prove that the Roman Domination and Global Roman Domination problems are not computationally equivalent by identifying graph classes on which one is linear-time solvable, while the other is NP-complete. 2. We show that the Global Roman Domination problem is NP-complete on split graphs, thereby resolving an open question posed by Panda and Goyal [Discrete Applied Mathematics, 2023]. 3. We prove that the Global Roman Domination problem is NP-complete on chordal bipartite graphs, planar bipartite graphs with maximum degree five, and circle graphs. 4. On the positive side, we present a linear-time algorithm for the Global Roman Domination problem on cographs.
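The stacked definitions are easy to make concrete in code; here is a hypothetical Python checker (our helper names, purely illustrative) that verifies whether a labeling is a global Roman dominating function:

```python
def is_rdf(adj, f):
    """Roman domination condition: every vertex labeled 0 has a
    neighbor labeled 2. `adj` maps each vertex to its neighbor set."""
    return all(any(f[v] == 2 for v in adj[u])
               for u in adj if f[u] == 0)

def is_global_rdf(adj, f):
    """Global RDF: a Roman dominating function of both G and its complement."""
    verts = set(adj)
    comp = {u: verts - adj[u] - {u} for u in adj}  # complement graph
    return is_rdf(adj, f) and is_rdf(comp, f)

# Example: the 4-cycle 0-1-2-3-0. Label the adjacent pair {0, 3} with 2.
# In G, vertices 1 and 2 each see a 2-labeled neighbor (0 and 3, resp.);
# in the complement (the perfect matching 0-2, 1-3), vertex 1 sees 3 and
# vertex 2 sees 0, so f is a global RDF of weight 4.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
f = {0: 2, 1: 0, 2: 0, 3: 2}
print(is_global_rdf(adj, f))  # True
```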


Permutation Matching Under Parikh Budgets: Linear-Time Detection, Packing, and Disjoint Selection

from arXiv: Data Structures and Algorithms

Authors: MD Nazmul Alam Shanto, Md. Tanzeem Rahat, Md. Manzurul Hasan

We study permutation (jumbled/Abelian) pattern matching over a general alphabet $Σ$. Given a pattern P of length m and a text T of length n, the classical task is to decide whether T contains a length-m substring whose Parikh vector equals that of P. While this existence problem admits a linear-time sliding-window solution, many practical applications require optimization and packing variants beyond mere detection. We present a unified sliding-window framework based on maintaining the Parikh-vector difference between P and the current window of T, enabling permutation matching in O(n + σ) time and O(σ) space, where σ = |Σ|. Building on this foundation, we introduce a combinatorial-optimization variant that we call Maximum Feasible Substring under Pattern Supply (MFSP): find the longest substring S of T whose symbol counts are component-wise bounded by those of P. We show that MFSP can also be solved in O(n + σ) time via a two-pointer feasibility maintenance algorithm, providing an exact packing interpretation of P as a resource budget. Finally, we address non-overlapping occurrence selection by modeling each permutation match as an equal-length interval and proving that a greedy earliest-finishing strategy yields a maximum-cardinality set of disjoint matches, computable in linear time once all matches are enumerated. Our results provide concise, provably correct algorithms with tight bounds, and connect frequency-based string matching to packing-style optimization primitives.
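
The detection part of this framework is easy to make concrete. The sketch below (a minimal Python illustration with our own naming, not the authors' code) maintains the Parikh-vector difference between the pattern and the current window, plus a counter of symbols whose difference is nonzero, so each window is tested in constant time:

    from collections import defaultdict

    def permutation_matches(pattern, text):
        # Report all i such that text[i:i+m] is a permutation of pattern.
        m, n = len(pattern), len(text)
        if m == 0 or m > n:
            return []
        diff = defaultdict(int)  # window counts minus pattern counts
        for c in pattern:
            diff[c] -= 1
        mismatched = sum(1 for v in diff.values() if v != 0)

        def bump(c, delta):
            nonlocal mismatched
            if diff[c] != 0:
                mismatched -= 1
            diff[c] += delta
            if diff[c] != 0:
                mismatched += 1

        matches = []
        for i, c in enumerate(text):
            bump(c, +1)                # symbol entering the window
            if i >= m:
                bump(text[i - m], -1)  # symbol leaving the window
            if i >= m - 1 and mismatched == 0:
                matches.append(i - m + 1)
        return matches

For instance, permutation_matches("ab", "aabb") returns [1].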

Boltzmann Sampling for Powersets without an Oracle

from arXiv: Data Structures and Algorithms

Authors: Jean Peyen

We show that powersets over structures with a bounded counting sequence can be sampled efficiently without evaluating the generating function. An algorithm is provided, implemented, and tested. Runtimes are comparable to existing Boltzmann samplers reported in the literature.

How many users have been here for a long time? Efficient solutions for counting long aggregated visits

from arXiv: Data Structures and Algorithms

Authors: Peyman Afshani, Rezaul Chowdhury, Inge Li Gørtz, Mayank Goswami, Francesco Silvestri, Mariafiore Tognon

This paper addresses the Counting Long Aggregated Visits problem, which is defined as follows. We are given $n$ users and $m$ regions, where each user spends some time visiting some regions. For a parameter $k$ and a query consisting of a subset of $r$ regions, the task is to count the number of distinct users whose aggregate time spent visiting the query regions is at least $k$. This problem is motivated by queries arising in the analysis of large-scale mobility datasets. We present several exact and approximate data structures for supporting counting long aggregated visits, as well as conditional and unconditional lower bounds. First, we describe an exact data structure that exhibits a space-time tradeoff, as well as efficient approximate solutions based on sampling and sketching techniques. We then study the problem in geometric settings where regions are points in $\mathbb{R}^d$ and queries are hyperrectangles, and derive exact data structures that achieve improved performance in these structured spaces.
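
For orientation, a single query is just an aggregation; the linear scan below (a naive Python baseline of ours, not one of the paper's data structures) answers it by summing each user's time over the query regions:

    def count_long_visits(visits, query_regions, k):
        # visits: iterable of (user, region, time) records
        query_regions = set(query_regions)
        total = {}
        for user, region, time in visits:
            if region in query_regions:
                total[user] = total.get(user, 0) + time
        return sum(1 for t in total.values() if t >= k)

The point of the paper is to avoid this per-query pass over the data by preprocessing the records into structures with provable space-time trade-offs.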

Engineering Compressed Matrix Multiplication with the Fast Walsh-Hadamard Transform

from arXiv: Data Structures and Algorithms

Authors: Joel Andersson, Matti Karppa

We present an implementation of Pagh's compressed matrix multiplication algorithm, a randomized algorithm that constructs sketches of matrices to compute an unbiased estimate of their product. By leveraging fast polynomial multiplication via the FFT, the algorithm achieves high performance when the product matrix is sparse or contains only a small number of entries with magnitudes significantly larger than the rest. We show empirically that the algorithm is practical and can outperform state-of-the-art DGEMM implementations when the product matrix has few nonzero entries or is otherwise dominated by a small subset of elements with large magnitude. As a minor theoretical contribution, we replace the FFT with the Fast Walsh-Hadamard Transform (FWHT) in sketched multiplication, preserving all correctness and variance guarantees of the original algorithm. Experiments with our carefully engineered multithreaded CPU implementation for dense double-precision matrices on 64-core CPU nodes across a range of synthetic benchmarks, exhibiting variable sparsity patterns, show that the FWHT variant is up to 4 times faster than the FFT-based version. Under favorable sparsity and magnitude patterns in the product matrix, our FWHT-based implementation achieves a speedup of up to 40 over DGEMM from Intel MKL, with low probability of error in the estimates. Our implementation is released as free software and comes with NumPy-compatible Python bindings.
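
The transform behind the speedup is a small in-place butterfly. The sketch below (a textbook Python version for intuition; the paper's code is a carefully engineered multithreaded implementation for dense double-precision matrices) computes the unnormalized Walsh-Hadamard transform with O(n log n) additions and subtractions:

    def fwht(a):
        # In-place unnormalized Fast Walsh-Hadamard Transform; len(a) must be a power of two.
        n = len(a)
        h = 1
        while h < n:
            for i in range(0, n, 2 * h):
                for j in range(i, i + h):
                    x, y = a[j], a[j + h]
                    a[j], a[j + h] = x + y, x - y
            h *= 2
        return a

Unlike the FFT, the transform is real-valued and is its own inverse up to a factor of n, which is one reason it is a natural drop-in replacement in sketched multiplication.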

Dynamic Hierarchical $j$-Tree Decomposition and Its Applications

from arXiv: Data Structures and Algorithms

Authors: Gramoz Goranci, Monika Henzinger, Peter Kiss, Ali Momeni, Gernot Zöcklein

We develop a new algorithmic framework for designing approximation algorithms for cut-based optimization problems on capacitated undirected graphs that undergo edge insertions and deletions. Specifically, our framework dynamically maintains a variant of the hierarchical $j$-tree decomposition of [Madry FOCS'10], achieving a poly-logarithmic approximation factor to the graph's cut structure and supporting edge updates in $O(n^ε)$ amortized update time, for any arbitrarily small constant $ε\in (0,1)$. Consequently, we obtain new trade-offs between approximation and update/query time for fundamental cut-based optimization problems in the fully dynamic setting, including all-pairs minimum cuts, sparsest cut, multi-way cut, and multi-cut. For the last three problems, these trade-offs give the first fully-dynamic algorithms achieving poly-logarithmic approximation in sub-linear time per operation. The main technical ingredient behind our dynamic hierarchy is a dynamic cut-sparsifier algorithm that can handle vertex splits with low recourse. This is achieved by white-boxing the dynamic cut sparsifier construction of [Abraham et al. FOCS'16], based on forest packing, together with new structural insights about the maintenance of these forests under vertex splits. Given the versatility of cut sparsification in both the static and dynamic graph algorithms literature, we believe this construction may be of independent interest.

A Grouped Sorting Queue Supporting Dynamic Updates for Timer Management in High-Speed Network Interface Cards

from arXiv: Data Structures and Algorithms

Authors: Zekun Wang, Binghao Yue, Weitao Pan, Jianyi Shi, Yue Hao

With the hardware offloading of network functions, network interface cards (NICs) undertake massive stateful, high-precision, and high-throughput tasks, where timers serve as a critical enabling component. However, existing timer management schemes suffer from heavy software load, low precision, lack of hardware update support, and overflow. This paper proposes two novel operations for priority queues--update and group sorting--to enable hardware timer management. To the best of our knowledge, this work presents the first hardware priority queue to support an update operation through the composition and propagation of basic operations to modify the priorities of elements within the queue. The group sorting mechanism ensures correct timing behavior post-overflow by establishing a group boundary priority to alter the sorting process and element insertion positions. Implemented with a hybrid architecture of a one-dimensional (1D) systolic array and shift registers, our design is validated through packet-level simulations for flow table timeout management. Results demonstrate that a 4K-depth, 16-bit timer queue achieves over 500 MHz (175 Mpps, 12 ns precision) in a 28nm process and over 300 MHz (116 Mpps) on an FPGA. Critically, it reduces LUT and FF usage by 31% and 25%, respectively, compared to existing designs.
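
As a software point of reference (a textbook heap-with-lazy-invalidation sketch in Python, ours, and not the paper's systolic-array design), a priority queue supporting timer updates can look as follows:

    import heapq

    class UpdatableTimerQueue:
        # Min-heap keyed by expiry time; stale entries are skipped on pop.
        def __init__(self):
            self._heap = []
            self._current = {}  # timer id -> latest expiry

        def set(self, timer_id, expiry):
            # Insert a new timer or update an existing one.
            self._current[timer_id] = expiry
            heapq.heappush(self._heap, (expiry, timer_id))

        def pop_expired(self, now):
            # Yield all timers whose latest expiry is <= now.
            while self._heap and self._heap[0][0] <= now:
                expiry, timer_id = heapq.heappop(self._heap)
                if self._current.get(timer_id) == expiry:
                    del self._current[timer_id]
                    yield timer_id

This kind of lazy bookkeeping is hard to realize at line rate in hardware, which is the gap the proposed in-queue update and group sorting operations address.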

SCaLE: Switching Cost aware Learning and Exploration

from arXiv: Data Structures and Algorithms

Authors: Neelkamal Bhuyan, Debankur Mukherjee, Adam Wierman

This work addresses the fundamental problem of unbounded metric movement costs in bandit online convex optimization, by considering high-dimensional dynamic quadratic hitting costs and $\ell_2$-norm switching costs in a noisy bandit feedback model. For a general class of stochastic environments, we provide the first algorithm SCaLE that provably achieves a distribution-agnostic sub-linear dynamic regret, without the knowledge of hitting cost structure. En route, we present a novel spectral regret analysis that separately quantifies eigenvalue-error driven regret and eigenbasis-perturbation driven regret. Extensive numerical experiments, against online-learning baselines, corroborate our claims, and highlight statistical consistency of our algorithm.

An Almost-Optimal Upper Bound on the Push Number of the Torus Puzzle

from arXiv: Data Structures and Algorithms

Authors: Matteo Caporrella, Stefano Leucci

We study the Torus Puzzle, a solitaire game in which the elements of an input $m \times n$ matrix need to be rearranged into a target configuration via a sequence of unit rotations (i.e., circular shifts) of rows and/or columns. Amano et al.\ proposed a more permissive variant of the above puzzle, where each row and column rotation can shift the involved elements by any amount of positions. The number of rotations needed to solve the puzzle in the original and in the permissive variants of the puzzle are respectively known as the \emph{push number} and the \emph{drag number}, where the latter is always smaller than or equal to the former and admits an existential lower bound of $Ω(mn)$. While this lower bound is matched by an $O(mn)$ upper bound, the push number is not so well understood. Indeed, to the best of our knowledge, only an $O(mn \cdot \max\{ m, n \})$ upper bound is currently known. In this paper, we provide an algorithm that solves the Torus Puzzle using $O(mn \cdot \log \max \{m, n\})$ unit rotations in a model that is more restricted than that of the original puzzle. This implies a corresponding upper bound on the push number and reduces the gap between the known upper and lower bounds from $Θ(\max\{m,n\})$ to $Θ(\log \max\{m, n\})$.
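
The moves of the puzzle are easy to state in code; the following minimal Python model (ours, for illustration only) implements the unit rotations that the push number counts:

    def rotate_row(grid, i, right=True):
        # Circularly shift row i of a list-of-lists grid by one position.
        row = grid[i]
        grid[i] = [row[-1]] + row[:-1] if right else row[1:] + [row[0]]

    def rotate_col(grid, j, down=True):
        # Circularly shift column j of the grid by one position.
        col = [row[j] for row in grid]
        col = [col[-1]] + col[:-1] if down else col[1:] + [col[0]]
        for i, row in enumerate(grid):
            row[j] = col[i]

In the permissive (drag) variant, a single move may apply one of these rotations several times at unit cost.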

Continuous Fairness On Data Streams

from arXiv: Data Structures and Algorithms

Authors: Subhodeep Ghosh, Zhihui Du, Angela Bonifati, Manish Kumar, David Bader, Senjuti Basu Roy

We study the problem of enforcing continuous group fairness over windows in data streams. We propose a novel fairness model that ensures group fairness at a finer granularity (referred to as a block) within each sliding window. This formulation is particularly useful when the window size is large, making it desirable to enforce fairness at a finer granularity. Within this framework, we address two key challenges: efficiently monitoring whether each sliding window satisfies block-level group fairness, and reordering the current window as effectively as possible when fairness is violated. To enable real-time monitoring, we design sketch-based data structures that maintain attribute distributions with minimal overhead. We also develop optimal, efficient algorithms for the reordering task, supported by rigorous theoretical guarantees. Our evaluation on four real-world streaming scenarios demonstrates the practical effectiveness of our approach. We achieve millisecond-level processing and a throughput of approximately 30,000 queries per second on average, depending on system parameters. The stream reordering algorithm improves block-level group fairness by up to 95% in certain cases, and by 50-60% on average across datasets. A qualitative study further highlights the advantages of block-level fairness compared to window-level fairness.
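
As a toy version of the monitoring task (exact counting on one window, with a simple proportional-representation notion of block fairness that we adopt only for illustration; the paper's model and its sketch-based structures are more refined), block-level fairness can be checked as follows:

    def blocks_are_fair(window, block_size, lower, upper):
        # window: sequence of group labels; lower/upper: per-group bounds on fractions.
        for start in range(0, len(window), block_size):
            block = window[start:start + block_size]
            for g in set(lower) | set(upper):
                frac = block.count(g) / len(block)
                if not (lower.get(g, 0.0) <= frac <= upper.get(g, 1.0)):
                    return False
        return True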

Wednesday, January 14

Dominik Hangleiter’s View Posts on: Has Quantum Advantage Been Achieved?

from Gil Kalai

In a recent post on Quantum Frontiers—the first in a series of three—Dominik Hangleiter discusses the question of whether quantum advantage (also known as “quantum supremacy”) has been achieved. Dominik describes polling audiences of experimental and theoretical physicists (with a … Continue reading →

In a recent post on Quantum Frontiers—the first in a series of three—Dominik Hangleiter discusses the question of whether quantum advantage (also known as “quantum supremacy”) has been achieved.

Dominik describes polling audiences of experimental and theoretical physicists (with a few computer scientists) at recent talks, expecting overwhelming agreement that quantum advantage had already been demonstrated. Instead, he found that less than half of the audience believed this to be the case, despite several experimental claims since 2019 and even earlier quantum simulation results.

These reactions motivated Dominik’s mini-series, whose goal is to argue that existing quantum computers can already perform tasks beyond the reach of classical computers. In the first post, he reviews the notion of quantum advantage, focusing especially on random circuit sampling experiments, and explains that current claims rely on proxies and extrapolations from classically tractable regimes. He announces that Part 2 will examine the evidence for these claims and address skeptical arguments.

From my perspective, the question of whether large-scale quantum computational advantage has been achieved is a very important one and deserves careful scrutiny. (I myself have worked over the years on assessing this question, as well as the more fundamental question of whether quantum supremacy can be achieved at all.)  Related questions—such as progress toward quantum error-correcting codes and the status of Majorana zero modes—are also of great importance.

Finally, there was a remark on Dominik’s post from a researcher in India who works on applications of quantum computing to agriculture, and coverage on a quantum communication blog based in Pakistan. It is heartwarming to see how research in quantum computation can bring together scientists from India and Pakistan.

By Gil Kalai

EATCS Fellows 2026 - Call for Nominations

from Luca Aceto

The call for nominations for EATCS Fellows 2026 is out and I copy-paste it below. On behalf of my colleagues in the Fellow Selection Committee 2026, I strongly encourage EATCS members to submit strong nominations.

-----------------

EATCS Fellows 2026 - Call for Nominations

Deadline: Saturday, 28 February 2026

The EATCS Fellows Program is established by the Association to recognize outstanding EATCS Members for their scientific achievements in the field of Theoretical Computer Science. The Fellow status is conferred by the EATCS Fellows Selection Committee upon a person having a track record of intellectual and organizational leadership within the EATCS community. Fellows are expected to be "model citizens" of the TCS community, helping to develop the standing of TCS beyond the frontiers of the community. In particular, the program is committed to broad representation and especially encourages nominations from underrepresented groups and research fields.
In order to be considered by the EATCS Fellows Selection Committee, candidates must be nominated by at least four EATCS Members. Please verify your membership at www.eatcs.org/.

The EATCS Fellows Selection Committee 2026 consists of

Luca Aceto
Orna Kupferman  (chair)
Stefano Leonardi 
Paul Spirakis

INSTRUCTIONS: Proposals for Fellow consideration in 2026 must be submitted by Friday, 28 February 2026, by email to the EATCS Secretary -  secretary@eatcs.org. The subject line of the email should read "EATCS Fellow Nomination - < surname of candidate >".
Please note that all nominees and nominators must be EATCS members. A nomination should consist of details on the items below. It can be co-signed by several EATCS members. Two nomination letters per candidate are welcome. 

1. Name of candidate, candidate’s current affiliation and position, candidate’s email address.

2. Short summary of the candidate’s accomplishments (citation – 25 words or less).

3. The most important contributions that qualify the candidate for the rank of EATCS Fellow according to the following two categories: (3.1) Technical achievements, (3.2) Outstanding service to the TCS community. Please limit your description to at most three pages.

4. Nominator(s): Name(s) Affiliation(s), email (s), and relationship to the candidate.

By Luca Aceto

Rational Functions Solved!

from Computational Complexity

It's not every day that one of my open problems is solved, especially one that I asked about over three decades ago. Matt Kovacs-Deak, Daochen Wang and Rain Zimin Yang just posted a paper showing that if you have a Boolean function \(f\) and two polynomials \(p\) and \(q\) of degree at most \(d\) such that \(f(x)=p(x)/q(x)\) for every \(x\) of length \(n\), then \(f\) has degree at most \(2d^4\), and hence decision tree complexity polynomial in \(d\).

Noam Nisan and Mario Szegedy had this beautiful paper in the early 90s showing that if a low-degree polynomial approximates a Boolean function, then the decision tree complexity, the number of bits you need to query from the input to decide the output, is polynomially bounded in the degree. For rational functions, that isn't true: the majority function can be approximated by the ratio of two low-degree polynomials. This was the main tool used by Beigel, Reingold and Spielman when they showed that the complexity class PP is closed under intersection.

Motivated by the complexity class C\(_=\)P\(\cap\)co-C\(_=\)P, I asked Mario whether a Boolean function computed exactly by a low-degree rational function must have low decision tree complexity. Mario thought about it, said it was an interesting problem, and added it as an open question to the journal version of his paper with Nisan.

I gave the problem to every student who asked me for an open combinatorial problem, as well as every reasoning AI model I could get my hands on. Iyer et al. posted a paper that examined this problem and showed it true for symmetric functions but the general case remained open until now.

The main insight of Kovacs-Deak, Wang and Yang is to consider the minimum block sensitivity, which is usually an uninteresting property of a function. The majority function for example has large maximum and average block sensitivity but low min block sensitivity. Nevertheless, their proof bounded the min block sensitivity by twice the degree of the rational function and used it to get a degree reduction of the rational function by querying a small number of bits of the input. Nice.

I always joked that if I told people the complexity application of the problem they would lose interest. The question came out of work with Steve Fenner, Stuart Kurtz and Lide Li on An Oracle Builder's Toolkit. C\(_=\)P is a generalization of co-NP where we ask if a gap-P function (the difference of two #P functions) is zero. So C\(_=\)P\(\cap\)co-C\(_=\)P is a generalization of NP\(\cap\)co-NP. This new theorem implies the technical result that if P = PSPACE unrelativized then P = C\(_=\)P\(\cap\)co-C\(_=\)P relative to a generic oracle. This implies an oracle where P = C\(_=\)P\(\cap\)co-C\(_=\)P and the PH is infinite and another where C\(_=\)P\(\cap\)co-C\(_=\)P has no complete sets.

If you are more of a quantum person, C\(_=\)P\(\cap\)co-C\(_=\)P = PostEQP while PostBQP = PP, and this work shows quite a difference between these classes. PP has complete sets and if P = PP then the polynomial-time hierarchy and even the counting hierarchy collapses to P.

By Lance Fortnow

Rational degree is polynomially related to degree

from arXiv: Computational Complexity

Authors: Matt Kovacs-Deak, Daochen Wang, Rain Zimin Yang

We prove that $\mathrm{deg}(f) \leq 2 \, \mathrm{rdeg}(f)^4$ for every Boolean function $f$, where $\mathrm{deg}(f)$ is the degree of $f$ and $\mathrm{rdeg}(f)$ is the rational degree of $f$. This resolves the second of the three open problems stated by Nisan and Szegedy, and attributed to Fortnow, in 1994.
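
In symbols (our restatement, consistent with the blog discussion above):

    \mathrm{deg}(f)  = \min \{ \deg p : p(x) = f(x) \text{ for all } x \in \{0,1\}^n \},
    \mathrm{rdeg}(f) = \min \{ \max(\deg p, \deg q) : q(x) \neq 0 \text{ and } f(x) = p(x)/q(x) \text{ for all } x \in \{0,1\}^n \}.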

On the parameterized complexity of the Maker-Breaker domination game

from arXiv: Computational Complexity

Authors: Guillaume Bagan, Mathieu Hilaire, Nacim Oijid, Aline Parreau

Since its introduction as a Maker-Breaker positional game by Duchêne et al. in 2020, the Maker-Breaker domination game has become one of the most studied positional games on vertices. In this game, two players, Dominator and Staller, alternately claim an unclaimed vertex of a given graph G. If at some point the set of vertices claimed by Dominator is a dominating set, she wins; otherwise, i.e., if Staller manages to isolate a vertex by claiming its entire closed neighborhood, Staller wins. Given a graph G and a first player, Dominator or Staller must have a winning strategy. We are interested in the computational complexity of determining which player has such a strategy. This problem is known to be PSPACE-complete on bipartite graphs of bounded degree and split graphs; polynomial on cographs, outerplanar graphs, and block graphs; and in NP for interval graphs. In this paper, we consider the parameterized complexity of this game. We start by considering as a parameter the number of moves of both players. We prove that for the general framework of Maker-Breaker positional games in hypergraphs, determining whether Breaker can claim a transversal of the hypergraph in k moves is W[2]-complete, in contrast to the problem of determining whether Maker can claim all the vertices of a hyperedge in k moves, which is known to be W[1]-complete since 2017. These two hardness results are then applied to the Maker-Breaker domination game, proving that it is W[2]-complete to decide if Dominator can dominate the graph in k moves and W[1]-complete to decide if Staller can isolate a vertex in k moves. Next, we provide FPT algorithms for the Maker-Breaker domination game parameterized by the neighborhood diversity, the modular width, the P4-fewness, the distance to cluster, and the feedback edge number.

Monte Carlo to Las Vegas for Recursively Composed Functions

from arXiv: Computational Complexity

Authors: Bandar Al-Dhalaan, Shalev Ben-David

For a (possibly partial) Boolean function $f\colon\{0,1\}^n\to\{0,1\}$ as well as a query complexity measure $M$ which maps Boolean functions to real numbers, define the composition limit of $M$ on $f$ by $M^*(f)=\lim_{k\to\infty} M(f^k)^{1/k}$. We study the composition limits of general measures in query complexity. We show this limit converges under reasonable assumptions about the measure. We then give a surprising result regarding the composition limit of randomized query complexity: we show $R_0^*(f)=\max\{R^*(f),C^*(f)\}$. Among other things, this implies that any bounded-error randomized algorithm for recursive 3-majority can be turned into a zero-error randomized algorithm for the same task. Our result extends also to quantum algorithms: on recursively composed functions, a bounded-error quantum algorithm can be converted into a quantum algorithm that finds a certificate with high probability. Along the way, we prove various combinatorial properties of measures and composition limits.
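
To make the flagship example concrete, here is recursive 3-majority, the $k$-fold composition $\mathrm{MAJ}_3^k$, in a few lines of Python (our illustration):

    def rec_maj3(bits):
        # Evaluate recursive 3-majority; len(bits) must be a power of 3.
        bits = list(bits)
        while len(bits) > 1:
            bits = [1 if a + b + c >= 2 else 0
                    for a, b, c in zip(bits[0::3], bits[1::3], bits[2::3])]
        return bits[0]

Here $f^k$ replaces each input bit of $f$ with a fresh copy of $f^{k-1}$; the theorem says a bounded-error randomized algorithm for this function can be converted into a zero-error one.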

Carrying is Hard: Exploring the Gap between Hardness for NP and PSPACE for the Hanano and Jelly no Puzzles

from arXiv: Computational Complexity

Authors: Michael C. Chavrimootoo, Jin Seok Youn

The Hanano Puzzle is a one-player game with irreversible gravity, where the goal is to bring colored blocks into contact with flowers of the corresponding color. The game Jelly no Puzzle shares similar mechanics. In general, determining if a given level of each of the two games is solvable is PSPACE-complete. There are also known restrictions under which determining if a level of Jelly no Puzzle is solvable is NP-complete. We find that under the same restrictions, determining if a level of the Hanano Puzzle is solvable remains PSPACE-complete. We thus study several restrictions on Hanano, contrast them with known results about Jelly no Puzzle, and posit that the mechanism at the heart of the PSPACE-hardness is the ability for blocks to carry each other.

In the Search for Good Neck Cuts

from arXiv: Computational Geometry

Authors: Sam Ruggerio, Sariel Har-Peled

We study the problem of finding neck-like features on a surface. Applications for such cuts include robotics, mesh segmentation, and algorithmic applications. We provide a new definition for a surface bottleneck -- informally, it is the shortest cycle relative to the size of the areas it separates. Inspired by the isoperimetric inequality, we formally define such optimal cuts, study their properties, and present several algorithms inspired by these ideas that work surprisingly well in practice. For examples of our algorithms, see https://neckcut.space.

Symbolic Functional Decomposition: A Reconfiguration Approach

from arXiv: Data Structures and Algorithms

Authors: Mateus de Oliveira Oliveira, Wim Van den Broeck

Functional decomposition is the process of breaking down a function $f$ into a composition $f=g(f_1,\dots,f_k)$ of simpler functions $f_1,\dots,f_k$ belonging to some class $\mathcal{F}$. This fundamental notion can be used to model applications arising in a wide variety of contexts, ranging from machine learning to formal language theory. In this work, we study functional decomposition by leveraging the notion of functional reconfiguration. In this setting, constraints are imposed not only on the factor functions $f_1,\dots,f_k$ but also on the intermediate functions arising during the composition process. We introduce a symbolic framework to address functional reconfiguration and decomposition problems. In our framework, functions arising during the reconfiguration process are represented symbolically, using ordered binary decision diagrams (OBDDs). The function $g$ used to specify the reconfiguration process is represented by a Boolean circuit $C$. Finally, the function class $\mathcal{F}$ is represented by a second-order finite automaton $\mathcal{A}$. Our main result states that functional reconfiguration, and hence functional decomposition, can be solved in fixed-parameter linear time when parameterized by the width of the input OBDD, by structural parameters associated with the reconfiguration circuit $C$, and by the size of the second-order finite automaton $\mathcal{A}$.

Lower Bounds for Dominating Set in Ball Graphs and for Weighted Dominating Set in Unit-Ball Graphs

from arXiv: Data Structures and Algorithms

Authors: Mark de Berg, Sándor Kisfaludi-Bak

Recently it was shown that many classic graph problems -- Independent Set, Dominating Set, Hamiltonian Cycle, and more -- can be solved in subexponential time on unit-ball graphs. More precisely, these problems can be solved in $2^{O(n^{1-1/d})}$ time on unit-ball graphs in $\mathbb R^d$, which is tight under ETH. The result can be generalized to intersection graphs of similarly-sized fat objects. For Independent Set the same running time can be achieved for non-similarly-sized fat objects, and for the weighted version of the problem. We show that such generalizations most likely are not possible for Dominating Set: assuming ETH, we prove that - there is no algorithm with running time $2^{o(n)}$ for Dominating Set on (non-unit) ball graphs in $\mathbb R^3$; - there is no algorithm with running time $2^{o(n)}$ for Weighted Dominating Set on unit-ball graphs in $\mathbb R^3$; - there is no algorithm with running time $2^{o(n)}$ for Dominating Set, Connected Dominating Set, or Steiner Tree on intersection graphs of arbitrary convex (but non-constant-complexity) objects in the plane.

Delaunay Triangulations with Predictions

from arXiv: Data Structures and Algorithms

Authors: Sergio Cabello, Timothy M. Chan, Panos Giannopoulos

We investigate algorithms with predictions in computational geometry, specifically focusing on the basic problem of computing 2D Delaunay triangulations. Given a set $P$ of $n$ points in the plane and a triangulation $G$ that serves as a "prediction" of the Delaunay triangulation, we would like to use $G$ to compute the correct Delaunay triangulation $\textit{DT}(P)$ more quickly when $G$ is "close" to $\textit{DT}(P)$. We obtain a variety of results of this type, under different deterministic and probabilistic settings, including the following: 1. Define $D$ to be the number of edges in $G$ that are not in $\textit{DT}(P)$. We present a deterministic algorithm to compute $\textit{DT}(P)$ from $G$ in $O(n + D\log^3 n)$ time, and a randomized algorithm in $O(n+D\log n)$ expected time, the latter of which is optimal in terms of $D$. 2. Let $R$ be a random subset of the edges of $\textit{DT}(P)$, where each edge is chosen independently with probability $ρ$. Suppose $G$ is any triangulation of $P$ that contains $R$. We present an algorithm to compute $\textit{DT}(P)$ from $G$ in $O(n\log\log n + n\log(1/ρ))$ time with high probability. 3. Define $d_{\mbox{\scriptsize\rm vio}}$ to be the maximum number of points of $P$ strictly inside the circumcircle of a triangle in $G$ (the number is 0 if $G$ is equal to $\textit{DT}(P)$). We present a deterministic algorithm to compute $\textit{DT}(P)$ from $G$ in $O(n\log^*n + n\log d_{\mbox{\scriptsize\rm vio}})$ time. We also obtain results in similar settings for related problems such as 2D Euclidean minimum spanning trees, and hope that our work will open up a fruitful line of future research.
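
The third measure has a direct operational reading; the brute-force Python check below (quadratic time per triangle, purely illustrative) computes $d_{\mbox{\scriptsize\rm vio}}$ via the standard incircle determinant, assuming each triangle is listed in counterclockwise order:

    def in_circumcircle(a, b, c, p):
        # True iff p lies strictly inside the circumcircle of ccw triangle (a, b, c).
        m = [(x - p[0], y - p[1], (x - p[0]) ** 2 + (y - p[1]) ** 2) for x, y in (a, b, c)]
        det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
               - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
               + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
        return det > 0

    def d_vio(points, triangles):
        # Maximum, over index triples in triangles, of the number of points
        # strictly inside that triangle's circumcircle.
        worst = 0
        for i, j, k in triangles:
            a, b, c = points[i], points[j], points[k]
            inside = sum(1 for t, p in enumerate(points)
                         if t not in (i, j, k) and in_circumcircle(a, b, c, p))
            worst = max(worst, inside)
        return worst

A triangulation $G$ of $P$ is Delaunay exactly when this value is 0.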

FPT Approximations for Connected Maximum Coverage

from arXiv: Data Structures and Algorithms

Authors: Tanmay Inamdar, Satyabrata Jana, Madhumita Kundu, Daniel Lokshtanov, Saket Saurabh, Meirav Zehavi

We revisit connectivity-constrained coverage through a unifying model, Partial Connected Red-Blue Dominating Set. Given a red-blue bipartite graph $G$ and an auxiliary connectivity graph $G_{conn}$ on red vertices, and integers $k, t$, the task is to find a $k$-sized subset of red vertices that dominates at least $t$ blue vertices, and that induces a connected subgraph in $G_{conn}$. This formulation captures connected variants of Max Coverage, Partial Dominating Set, and Partial Vertex Cover studied in prior literature. After identifying (parameterized) inapproximability results inherited from known problems, we first show that the problem is fixed-parameter tractable by $t$. Furthermore, when the bipartite graph excludes $K_{d,d}$ as a subgraph, we design (resp. efficient) parameterized approximation schemes for approximating $t$ (resp. $k$). Notably, these FPT approximations do not impose any restrictions on $G_{conn}$. Together, these results chart the boundary between hardness and FPT-approximability for connectivity-constrained coverage.

How Hard Is It to Rig a Tournament When Few Players Can Beat or Be Beaten by the Favorite?

from arXiv: Data Structures and Algorithms

Authors: Zhonghao Wang, Junqiang Peng, Yuxi Liu, Mingyu Xiao

In knockout tournaments, players compete in successive rounds, with losers eliminated and winners advancing until a single champion remains. Given a tournament digraph $D$, which encodes the outcomes of all possible matches, and a designated player $v^* \in V(D)$, the Tournament Fixing problem (TFP) asks whether the tournament can be scheduled in a way that guarantees $v^*$ emerges as the winner. TFP is known to be NP-hard, but is fixed-parameter tractable (FPT) when parameterized by structural measures such as the feedback arc set (fas) or feedback vertex set (fvs) number of the tournament digraph. In this paper, we introduce and study two new structural parameters: the number of players who can defeat $v^*$ (i.e., the in-degree of $v^*$, denoted by $k$) and the number of players that $v^*$ can defeat (i.e., the out-degree of $v^*$, denoted by $\ell$). A natural question is: can TFP be solved efficiently when $k$ or $\ell$ is small? We answer this question affirmatively by showing that TFP is FPT when parameterized by either the in-degree or the out-degree of $v^*$. Our algorithm for the in-degree parameterization is particularly involved and technically intricate. Notably, the in-degree $k$ can remain small even when other structural parameters, such as fas or fvs, are large. Hence, our results offer a new perspective and significantly broaden the parameterized algorithmic understanding of the Tournament Fixing problem.

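For intuition about what scheduling (seeding) can and cannot achieve, the following exponential-time sketch computes, via a subset DP over balanced brackets, the full set of players that can be made champion; TFP then asks whether $v^*$ is in this set. This is a naive baseline of our own, not the paper's FPT algorithm, and it assumes the number of players is a power of two.

```python
from functools import lru_cache
from itertools import combinations

def possible_winners(beats):
    """beats[i][j] is True iff player i beats player j.  Returns the set of
    players that win under some seeding of a single-elimination bracket on
    all n players (n must be a power of two)."""
    n = len(beats)

    @lru_cache(maxsize=None)
    def winners(mask):
        players = [i for i in range(n) if mask >> i & 1]
        if len(players) == 1:
            return frozenset(players)
        half = len(players) // 2
        lo = mask & -mask                      # pin the lowest player's half
        rest = [i for i in players if (1 << i) != lo]
        result = set()
        for side in combinations(rest, half - 1):
            a = lo | sum(1 << i for i in side)
            b = mask ^ a
            wa, wb = winners(a), winners(b)
            result |= {p for p in wa if any(beats[p][q] for q in wb)}
            result |= {q for q in wb if any(beats[q][p] for p in wa)}
        return frozenset(result)

    return winners((1 << n) - 1)

# 4 players: 0 beats 1, 1 beats 2, 2 beats 0, and everyone beats 3.
beats = [[False, True, False, True],
         [False, False, True, True],
         [True, False, False, True],
         [False, False, False, False]]
print(sorted(possible_winners(beats)))   # [0, 1, 2]: seeding picks the champion
```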

Protrusion Decompositions Revisited: Uniform Lossy Kernels for Reducing Treewidth and Linear Kernels for Hitting Disconnected Minors

from arXiv: Data Structures and Algorithms

Authors: Roohani Sharma, Michał Włodarczyk

Let F be a finite family of graphs. In the F-Deletion problem, one is given a graph G and an integer k, and the goal is to find k vertices whose deletion results in a graph with no minor from the family F. This may be regarded as a far-reaching generalization of Vertex Cover and Feedback Vertex Set. In their seminal work, Fomin, Lokshtanov, Misra & Saurabh [FOCS 2012] gave a polynomial kernel for this problem when the family F contains a planar graph. As the size of their kernel is g(F) * k^{f(F)}, a natural follow-up question was whether the dependence on F in the exponent of k can be avoided. The answer turned out to be negative: Giannopoulou, Jansen, Lokshtanov & Saurabh [TALG 2017] proved that this is already inevitable for the special case of the Treewidth-d-Deletion problem. In this work, we show that this non-uniformity can be avoided at the expense of a small loss. First, we present a simple 2-approximate kernelization algorithm for Treewidth-d-Deletion with kernel size g(d) * k^5. Next, we show that the approximation factor can be made arbitrarily close to 1 if we settle for a kernelization protocol with O(1) calls to an oracle that solves instances of size bounded by a uniform polynomial in k. We also obtain linear kernels on sparse graph classes when F contains a planar graph, whereas the previously known theorems required all graphs in F to be connected. Specifically, we generalize the kernelization algorithm of Kim, Langer, Paul, Reidl, Rossmanith, Sau & Sikdar [TALG 2015] to graph classes that exclude a topological minor.

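To ground the notion of a polynomial kernel, here is the textbook Buss reduction for Vertex Cover, arguably the simplest F-Deletion instance (F = {K_2}): vertices of degree larger than the budget are forced into the solution, after which any YES-instance has at most k^2 edges. This illustrates kernelization in general, with a fixed exponent of k, and is not the paper's algorithm for Treewidth-d-Deletion.

```python
def buss_kernel(adj, k):
    """Buss kernelization for Vertex Cover on an adjacency dict {v: set}.
    Returns (reduced_adj, remaining_k, forced_vertices), or None if the
    instance is provably a NO-instance."""
    adj = {u: set(vs) for u, vs in adj.items()}
    forced = []
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if len(adj[u]) > k:          # u must be in any k-sized cover
                forced.append(u)
                for v in adj[u]:
                    adj[v].discard(u)
                del adj[u]
                k -= 1
                changed = True
                if k < 0:
                    return None
        for u in [u for u in adj if not adj[u]]:
            del adj[u]                   # isolated vertices are irrelevant
    m = sum(len(vs) for vs in adj.values()) // 2
    if m > k * k:                        # max degree <= k, so a k-cover
        return None                      # covers at most k^2 edges
    return adj, k, forced

# Star K_{1,5} with budget k=1: the center is forced, the kernel is empty.
star = {0: {1, 2, 3, 4, 5}, 1: {0}, 2: {0}, 3: {0}, 4: {0}, 5: {0}}
print(buss_kernel(star, 1))   # ({}, 0, [0])
```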

Kantorovich Distance via Spanning Trees: Properties and Algorithms

from arXiv: Data Structures and Algorithms

Authors: Jérémie Bigot, Luis Fredes

We study optimal transport between probability measures supported on the same finite metric space, where the ground cost is a distance induced by a weighted connected graph. Building on recent work showing that the resulting Kantorovich distance can be expressed as a minimization problem over the set of spanning trees of the underlying graph, we investigate the implications of this reformulation for constructing an optimal transport plan and a dual potential from the solution of that optimization problem. In this setting, we derive an explicit formula for the Kantorovich potential in terms of the imbalanced cumulative mass (a generalization of the cumulative distribution function on R) along an optimal spanning tree, under a weak non-degeneracy condition on the pair of measures that guarantees the uniqueness of a dual potential. Our second contribution establishes the existence of an optimal transport plan that can be computed efficiently by a dynamic programming procedure once an optimal spanning tree is known. Finally, we propose a stochastic algorithm based on simulated annealing on the space of spanning trees to compute such an optimal spanning tree. Numerical experiments illustrate the theoretical results and demonstrate the practical relevance of the proposed approach for optimal transport on finite metric spaces.

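On a fixed tree, the "imbalanced cumulative mass" viewpoint yields a standard closed form: the W1 distance is the sum over edges of the edge weight times the absolute net mass that must cross that edge. The sketch below (our own illustration, with the tree given by parent pointers) evaluates this formula; the paper's harder ingredient, optimizing this quantity over all spanning trees of a general graph, is not attempted here.

```python
def w1_on_tree(parent, weight, mu, nu):
    """W1 distance between mass vectors mu and nu on a tree metric.
    parent[v] is v's parent (parent[root] == root); weight[v] is the length
    of the edge (v, parent[v]).  Closed form: sum over edges of
    w(e) * |net mass imbalance in the subtree below e|."""
    n = len(parent)
    depth = [0] * n
    for v in range(n):
        u, d = v, 0
        while parent[u] != u:
            u, d = parent[u], d + 1
        depth[v] = d
    surplus = [mu[v] - nu[v] for v in range(n)]   # net mass in v's subtree
    total = 0.0
    for v in sorted(range(n), key=lambda v: -depth[v]):   # leaves first
        if parent[v] != v:
            total += weight[v] * abs(surplus[v])
            surplus[parent[v]] += surplus[v]
    return total

# Path 0-1-2 with unit edges: moving all mass from node 0 to node 2 costs 2.
parent = [0, 0, 1]          # 0 is the root; edges (1,0) and (2,1)
weight = [0.0, 1.0, 1.0]
print(w1_on_tree(parent, weight, mu=[1.0, 0.0, 0.0], nu=[0.0, 0.0, 1.0]))  # 2.0
```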

Derandomizing Matrix Concentration Inequalities from Free Probability

from arXiv: Data Structures and Algorithms

Authors: Robert Wang, Lap Chi Lau, Hong Zhou

Recently, sharp matrix concentration inequalities [BBvH23, BvH24] were developed using the theory of free probability. In this work, we design polynomial-time deterministic algorithms to construct outcomes that satisfy the guarantees of these inequalities. As direct consequences, we obtain polynomial-time deterministic algorithms for the matrix Spencer problem [BJM23] and for constructing near-Ramanujan graphs. Our proofs show that the concepts and techniques of free probability are useful not only for mathematical analyses but also for efficient computations.

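For intuition about the matrix Spencer problem mentioned above: given symmetric matrices $A_1, \dots, A_n$ with operator norm at most 1, one seeks signs $x \in \{-1, +1\}^n$ making $\|\sum_i x_i A_i\|$ small (the matrix Spencer conjecture asks for $O(\sqrt{n})$ when the matrices are $n \times n$). The brute-force sketch below only illustrates the objective; the paper's contribution is achieving such guarantees deterministically in polynomial time.

```python
import itertools
import numpy as np

def spencer_brute_force(mats):
    """Find signs in {-1, +1}^n minimizing the spectral norm of sum x_i A_i.
    Exponential in n: an illustration of the objective, not an algorithm."""
    best_norm, best_signs = float("inf"), None
    for signs in itertools.product((-1, 1), repeat=len(mats)):
        norm = np.linalg.norm(sum(s * A for s, A in zip(signs, mats)), 2)
        if norm < best_norm:
            best_norm, best_signs = norm, signs
    return best_norm, best_signs

# Random symmetric matrices normalized to operator norm 1.
rng = np.random.default_rng(0)
mats = []
for _ in range(6):
    B = rng.standard_normal((6, 6))
    A = (B + B.T) / 2
    mats.append(A / np.linalg.norm(A, 2))
norm, signs = spencer_brute_force(mats)
print(f"best norm {norm:.3f} with signs {signs}")   # compare to sqrt(6) ~ 2.45
```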

Tuesday, January 13

A Ten-Year-Old Video about Larry Guth and Nets Katz.

from Gil Kalai

Facebook recently reminded me of a videotaped lecture I gave ten years ago on Larry Guth and Nets Katz. In that lecture, I discussed their famous joint work on the Erdős distinct distances problem, as well as some of their individual contributions.

Many of the problems mentioned there—relating to the Kakeya problem, the Navier–Stokes equations, systolic inequalities, quantum error-correcting codes, and sum–product theorems—have led to fascinating research over the past decade. Alex Lubotzky and Noam Solomon also made short but forceful guest appearances in the video.

From the news

A few days ago my wife showed me an article (in Hebrew) reporting a startling new development in mathematics and asked whether I knew about the result and the two mathematicians involved.

I did not recognize either of the two mathematicians from the accompanying picture, nor could I immediately identify the mathematical problem they were studying. But once I read the text, things became clearer: the reported progress concerned the Riemann Hypothesis, and the two mathematicians were Larry Guth and James Maynard—both of whom I know quite well (or at least, I thought I did).

Still, they looked rather different from how I remembered them, and I could not tell who was who. A bit of further investigation resolved the puzzle: the image was an AI-generated picture of two mathematicians discussing a mathematical problem. Apparently, in the age of AI, mathematical breakthroughs are still achieved by real people, but their photographs are optional.

AI and mathematics

I recently asked on MathOverflow about AI applications in mathematics and received interesting answers.

By Gil Kalai