Last Update

OPML feed of all feeds.

Subscribe to the Atom feed or RSS feed to stay up to date.

Thank you to arXiv for use of its open access interoperability.

Note: the date of arXiv entries announced right after publication holidays might incorrectly show up as the date of the publication holiday itself. This is due to our ad hoc method of inferring announcement dates, which are not returned by the arXiv API.

Powered by Pluto.

Source on GitHub.

Maintained by Nima Anari, Arnab Bhattacharyya, Gautam Kamath.

Theory of Computing Report

Friday, May 16

Minimalistic combat, and Sifu

from Emanuele Viola

Click to play the game https://scratch.mit.edu/projects/1172034203/

These days I am playing Sifu, on a PS5, on a huge projected screen. The game is so amazing that it is easy to forget all that I am about to write and just enjoy the game.

Still, it is sad that such a great game hasn’t quite been able to revolutionize combat systems. Some obviously annoying things:

  1. Enemies perform combos in the air. The enemy starts a combo, I dash 20 feet away, and then the enemy still has to finish the combo hitting the air? That’s even more unrealistic than me taking on 20 bad dudes armed with machetes.
  2. Your frenetic opponents suddenly freeze just to watch you execute a lengthy and complex finishing combo.
  3. More generally, the AI is average (i.e., poor): the enemies are very predictable, even though the game isn’t easy. (When I played God of War some years ago, I set the difficulty to maximum and beat it straight through without thinking twice. I tried to play at “maestro” difficulty in Sifu, but I was forced to go back to “normal”… at least for the time being.)

This has reminded me of a project that I have been thinking about forever, on and off, without ever giving it any serious thought. If you are a student who, like me, likes gaming, especially combat games, and who is looking for a crazy project, this could be for you.

An obvious approach is to use some heavy hammer like deep learning and see if you can get better AI for the characters with their existing moves. I am not sure to what extent this has been tried and I’d be interested in references. But here I am looking for something a little different.

In short, I ask: what is the simplest combat system in which people can be consistently ranked and exhibit different playing strategies? Chess is an example: players are ranked, and they play differently, some more aggressively than others. Also, I’d like to be able to train AIs to perform like humans, at various levels and exhibiting different strategies.

I am thinking of something extremely basic, like rock-paper-scissors but based less on chance and more on skill, in particular reflexes and speed, or bullet 1-d checkers, but more like combat. Perhaps a system involving only two moves (high or low attack) and maybe a parry. Skill could be a matter of memory (players incrementally build their combos, and the opposing player has to parry in the right sequence), or a matter of reflexes (moves get faster and faster, and the player with the fastest reflexes prevails), or a matter of precision (you need to get exactly the right distance), or some combination of these.

I coded up such a minimalistic combat game, which you can play at the top of this post. As time passes the moves get faster and faster, so you have to guess when enough time has passed that your attack can’t be blocked. The action could look like two opponents studying each other, waiting for the right moment to strike. You can either parry or strike. Being hit halves your speed (or something like that), while a successful parry multiplies it by 1.5 (or something like that). Both parrying and attacking cost you speed. The first player to go below a threshold loses. As always, any comment is very much appreciated.
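
For anyone who wants to tinker before opening Scratch, here is a rough sketch in Python of the rules described above. The resolution logic and all constants (the halving on a hit, the 1.5x parry bonus, the action cost, the speed-up rate, the losing threshold) are placeholder guesses of mine, not the exact rules of the game linked at the top.

    import random

    # A minimal sketch of the two-move combat system described above.
    # The resolution rules and all constants are guesses/placeholders.

    HIT_PENALTY = 0.5   # being hit halves your speed
    PARRY_BONUS = 1.5   # a successful parry multiplies your speed by 1.5
    ACTION_COST = 0.9   # both striking and parrying cost some speed
    SPEED_UP = 1.05     # as time passes, everything gets faster
    THRESHOLD = 0.2     # the first player to drop below this loses

    def resolve(action_a, action_b, speed_a, speed_b):
        """Resolve one simultaneous round and return the two new speeds."""
        def one_side(my_action, my_speed, their_action, their_speed):
            s = my_speed
            if my_action in ("strike", "parry"):
                s *= ACTION_COST  # acting costs speed
            if their_action == "strike":
                # a strike is blocked only by a parry from a fast-enough defender
                blocked = my_action == "parry" and my_speed >= their_speed
                s *= PARRY_BONUS if blocked else HIT_PENALTY
            return s
        return (one_side(action_a, speed_a, action_b, speed_b),
                one_side(action_b, speed_b, action_a, speed_a))

    def play(strategy_a, strategy_b, rounds=100, seed=0):
        random.seed(seed)
        speed_a = speed_b = 1.0
        for t in range(rounds):
            a = strategy_a(t, speed_a, speed_b)
            b = strategy_b(t, speed_b, speed_a)
            speed_a, speed_b = resolve(a, b, speed_a, speed_b)
            speed_a *= SPEED_UP  # moves get faster as time passes
            speed_b *= SPEED_UP
            if speed_a < THRESHOLD:
                return "B wins"
            if speed_b < THRESHOLD:
                return "A wins"
        return "draw"

    aggressive = lambda t, me, them: "strike" if random.random() < 0.7 else "parry"
    patient = lambda t, me, them: "strike" if me > them else "parry"
    print(play(aggressive, patient))

Different strategy functions (and different constants) are exactly the knobs that could let players, or trained bots, exhibit distinguishable styles.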

By Manu

Postdoc positions within the Embedded Systems Engineering section at DTU Compute (apply by August 31, 2025)

from CCI: jobs

DTU Compute invites talented researchers to apply for two fully funded, 2-year postdoctoral positions. We build secure and reliable cyber-physical systems and lead DTU’s contributions to Shift2SDV.

The available positions are:
1. Secure Task Offloading for Software-Defined Vehicles
2. Runtime Analysis and Configuration of Time-Sensitive Networking

Website: https://efzu.fa.em2.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_2001/jobs?lastSelectedFacet=ORGANIZATIONS&selectedOrganizationsFacet=300000003721164
Email: paupo@dtu.dk

By shacharlovett

TR25-063 | Algebraic Pseudorandomness in $VNC^0$ | Robert Andrews

from ECCC Papers

We study the arithmetic complexity of hitting set generators, which are pseudorandom objects used for derandomization of the polynomial identity testing problem. We give new explicit constructions of hitting set generators whose outputs are computable in $VNC^0$, i.e., can be computed by arithmetic formulas of constant size. Unconditionally, we construct a $VNC^0$-computable generator that hits arithmetic circuits of constant depth and polynomial size. We also give conditional constructions, under strong but plausible hardness assumptions, of $VNC^0$-computable generators that hit arithmetic formulas and arithmetic branching programs of polynomial size, respectively. As a corollary of our constructions, we derive lower bounds for subsystems of the Geometric Ideal Proof System of Grochow and Pitassi. Constructions of such generators are implicit in prior work of Kayal on lower bounds for the degree of annihilating polynomials. Our main contribution is a construction whose correctness relies on circuit complexity lower bounds rather than degree lower bounds.

On the quantum computational complexity of classical linear dynamics with geometrically local interactions: Dequantization and universality

from arXiv: Computational Complexity

Authors: Kazuki Sakamoto, Keisuke Fujii

The simulation of large-scale classical systems in exponentially small space on quantum computers has gained attention. The prior work demonstrated that a quantum algorithm offers an exponential speedup over any classical algorithm in simulating classical dynamics with long-range interactions. However, many real-world classical systems, such as those arising from partial differential equations, exhibit only local interactions. The question remains whether quantum algorithms can still provide exponential speedup under this condition. In this work, we thoroughly characterize the computational complexity of quantum algorithms for simulating such geometrically local systems. First, we dequantize the quantum algorithm for simulating short-time (polynomial-time) dynamics of such systems. This implies that the problem of simulating this dynamics does not yield any exponential quantum advantage. Second, we show that quantum algorithms for short-time dynamics have the same computational complexity as polynomial-time probabilistic classical computation. Third, we show that the computational complexity of quantum algorithms for long-time (exponential-time) dynamics is captured by exponential-time and polynomial-space quantum computation. This suggests a super-polynomial time advantage when restricting the computation to polynomial-space, or an exponential space advantage otherwise. This work offers new insights into the complexity of classical dynamics governed by partial differential equations, providing a pathway for achieving quantum advantage in practical problems.

Multi-Agent Path Finding For Large Agents Is Intractable

from arXiv: Computational Complexity

Authors: Artem Agafonov, Konstantin Yakovlev

The multi-agent path finding (MAPF) problem asks to find a set of paths on a graph such that, when synchronously following these paths, the agents never encounter a conflict. In the most widespread MAPF formulation, the so-called Classical MAPF, the agents' sizes are neglected and two types of conflicts are considered: occupying the same vertex or using the same edge at the same time step. Meanwhile, in numerous practical applications, e.g. in robotics, taking into account the agents' sizes is vital to ensure that the MAPF solutions can be safely executed. Introducing large agents yields an additional type of conflict, arising when one agent follows an edge and its body overlaps with the body of another agent that is not actually using this same edge (e.g. staying still at some distinct vertex of the graph). Until now it was not clear how much harder the problem gets when such conflicts are to be considered while planning. Specifically, it was known that the Classical MAPF problem on an undirected graph can be solved in polynomial time; however, no complete polynomial-time algorithm has been presented for MAPF with large agents. In this paper we establish, for the first time, that the latter problem is NP-hard and thus, if P!=NP, no polynomial-time algorithm for it exists. Our proof is based on the technique, prevalent in the field, of reducing the seminal 3SAT problem (which is known to be NP-complete) to the problem at hand. In particular, for an arbitrary 3SAT formula we procedurally construct a dedicated graph with specific start and goal vertices and show that the given 3SAT formula is satisfiable iff the corresponding path finding instance has a solution.
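
The paper's hardness proof is a reduction from 3SAT and is not sketched here. The snippet below only illustrates the new conflict type, under the assumption (mine, for illustration; not necessarily the paper's exact geometric model) that agents are disks sliding along straight edges: an agent traversing an edge can overlap an agent parked at a vertex it never enters.

    import math

    def segment_point_distance(p, q, c):
        """Minimum distance from point c to the segment p-q in the plane."""
        px, py = p
        qx, qy = q
        cx, cy = c
        dx, dy = qx - px, qy - py
        if dx == 0 and dy == 0:
            return math.hypot(cx - px, cy - py)
        # projection of c onto the line through p and q, clamped to the segment
        t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / (dx * dx + dy * dy)))
        return math.hypot(cx - (px + t * dx), cy - (py + t * dy))

    def body_conflict(edge, radius_moving, vertex, radius_waiting):
        """True if a disk agent sliding along `edge` overlaps a disk agent parked at `vertex`."""
        return segment_point_distance(edge[0], edge[1], vertex) < radius_moving + radius_waiting

    # The moving agent never visits the waiting agent's vertex, yet their bodies overlap:
    print(body_conflict(((0, 0), (4, 0)), 1.0, (2, 1.5), 1.0))  # True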

A Fine-Grained Complexity View on Propositional Abduction -- Algorithms and Lower Bounds

from arXiv: Computational Complexity

Authors: Victor Lagerkvist, Mohamed Maizia, Johannes Schmidt

The Boolean satisfiability problem (SAT) is a well-known example of monotonic reasoning, of intense practical interest due to fast solvers, complemented by rigorous fine-grained complexity results. However, for non-monotonic reasoning, e.g., abductive reasoning, comparably little is known outside classic complexity theory. In this paper we take a first step toward bridging the gap between monotonic and non-monotonic reasoning by analyzing the complexity of intractable abduction problems under the seemingly overlooked but natural parameter $n$: the number of variables in the knowledge base. We obtain several positive results for $\Sigma^P_2$- as well as NP- and coNP-complete fragments, which imply the first example of beating exhaustive search for a $\Sigma^P_2$-complete problem (to the best of our knowledge). We complement this with lower bounds and for many fragments rule out improvements under the (strong) exponential-time hypothesis.

New results in canonical polyadic decomposition over finite fields

from arXiv: Computational Complexity

Authors: Jason Yang

Canonical polyadic decomposition (CPD) is at the core of fast matrix multiplication, a computational problem with widespread implications across several seemingly unrelated problems in computer science. Much recent progress in this field has used randomized heuristic search to find new CPDs, often over a finite field. However, if these techniques fail to find a CPD with low enough rank, they cannot prove that no such CPD exists. Consequently, these methods fail to resolve certain long-standing questions, such as whether the tensor corresponding to $3\times 3$ matrix multiplication has rank less than 23. To make progress on these problems, we develop a novel algorithm that preserves exactness, i.e. it can provably verify whether or not a given tensor has a specified rank. Compared to brute force, when searching for a rank-$R$ CPD of an $n_0\times\dots\times n_{D-1}$-shaped tensor over a finite field $\mathbb{F}$, where $n_0\ge \dots\ge n_{D-1}$, our algorithm saves a multiplicative factor of roughly $|\mathbb{F}|^{R(n_0-1)+n_0(\sum_{d\ge 1} n_d)}$. Additionally, our algorithm runs in polynomial time. We also find a novel algorithm to search for border CPDs, a variant of CPDs that is also important in fast matrix multiplication. Finally, we study the maximum rank problem and give new upper and lower bounds, both for families of tensor shapes and specific shapes. Although our CPD search algorithms are still too slow to resolve the rank of $3\times 3$ matrix multiplication, we are able to utilize them in this problem by adding extra search pruners that do not affect exactness or increase asymptotic running time.
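
To make the object being searched for concrete, the sketch below (an illustration of the definition, not the paper's algorithm) reconstructs a tensor from candidate CPD factor matrices over a prime field GF(p) and checks whether they form a valid rank-$R$ decomposition.

    import numpy as np

    def cpd_reconstruct(factors, p):
        """Rebuild sum_r factors[0][:, r] (x) ... (x) factors[D-1][:, r] over GF(p).

        factors[d] has shape (n_d, R); the result has shape (n_0, ..., n_{D-1}).
        """
        R = factors[0].shape[1]
        T = np.zeros(tuple(f.shape[0] for f in factors), dtype=np.int64)
        for r in range(R):
            rank_one = factors[0][:, r]
            for f in factors[1:]:
                rank_one = np.multiply.outer(rank_one, f[:, r]) % p
            T = (T + rank_one) % p
        return T

    def is_rank_R_cpd(T, factors, p):
        """Check that the factor matrices are a CPD of T over GF(p)."""
        return np.array_equal(cpd_reconstruct(factors, p), T % p)

    # Toy example over GF(2): a 2x2x2 tensor with a (not necessarily minimal) rank-2 CPD.
    p = 2
    A = np.array([[1, 0], [0, 1]])
    B = np.array([[1, 1], [0, 1]])
    C = np.array([[1, 0], [1, 1]])
    T = cpd_reconstruct([A, B, C], p)
    print(is_rank_R_cpd(T, [A, B, C], p))  # True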

Topology-driven identification of repetitions in multi-variate time series

from arXiv: Computational Geometry

Authors: Simon Schindler, Elias Steffen Reich, Saverio Messineo, Simon Hoher, Stefan Huber

Many multi-variate time series obtained in the natural sciences and engineering possess repetitive behavior, as for instance the state-space trajectories of industrial machines in discrete automation. Recovering the times of recurrence from such a multi-variate time series is of fundamental importance for many monitoring and control tasks. For a periodic time series, this is equivalent to determining its period length. In this work we present a persistent homology framework to estimate recurrence times in multi-variate time series with different generalizations of cyclic behavior (periodic, repetitive, and recurring). To this end, we provide three specialized methods within our framework that are provably stable and validate them using real-world data, including a new benchmark dataset from an injection molding machine.

Simpler and Faster Directed Low-Diameter Decompositions

from arXiv: Data Structures and Algorithms

Authors: Jason Li

We present a simpler and faster algorithm for low-diameter decompositions on directed graphs, matching the $O(\log m\log\log m)$ loss factor from Bringmann, Fischer, Haeupler, and Latypov (ICALP 2025) and improving the running time to $O((m+n\log\log n)\log^2m\log\log m)$.

Price of Anarchy for Congestion and Scheduling Games via Vector Fitting

from arXiv: Data Structures and Algorithms

Authors: Danish Kashaev

We provide a dual fitting technique on a semidefinite program yielding simple proofs of tight bounds for the robust price of anarchy of several congestion and scheduling games under the sum of weighted completion times objective. The same approach also allows to bound the approximation ratio of local search algorithms for the scheduling problem $R || \sum w_j C_j$. All of our results are obtained through a simple unified dual fitting argument on the same semidefinite programming relaxation, which can essentially be obtained through the first round of the Lasserre/Sum of Squares hierarchy. As our main application, we show that the known coordination ratio bounds of respectively $4, (3 + \sqrt{5})/2 \approx 2.618,$ and $32/15 \approx 2.133$ for the scheduling game $R || \sum w_j C_j$ under the coordination mechanisms Smith's Rule, Proportional Sharing and Rand (STOC 2011) can be extended to congestion games and obtained through this approach. For the natural restriction where the weight of each player is proportional to its processing time on every resource, we show that the last bound can be improved from 2.133 to 2. This improvement can also be made for general instances when considering the price of anarchy of the game, rather than the coordination ratio. As a further application of the technique, we show that it recovers the tight bound of $(3 + \sqrt{5})/2$ for the price of anarchy of weighted affine congestion games and the Kawaguchi-Kyan bound of $(1+ \sqrt{2})/2$ for the pure price of anarchy of $P || \sum w_j C_j$. In addition, this approach recovers the known tight approximation ratio of $(3 + \sqrt{5})/2 \approx 2.618$ for a natural local search algorithm for $R || \sum w_j C_j$, as well as the best currently known combinatorial approximation algorithm for this problem achieving an approximation ratio of $(5 + \sqrt{5})/4 + \varepsilon \approx 1.809 + \varepsilon$.

Improved Rank Aggregation under Fairness Constraint

from arXiv: Data Structures and Algorithms

Authors: Alvin Hong Yao Yan, Diptarka Chakraborty, Himika Das, Sanjana Dey

Aggregating multiple input rankings into a consensus ranking is essential in various fields such as social choice theory, hiring, college admissions, web search, and databases. A major challenge is that the optimal consensus ranking might be biased against individual candidates or groups, especially those from marginalized communities. This concern has led to recent studies focusing on fairness in rank aggregation. The goal is to ensure that candidates from different groups are fairly represented in the top-$k$ positions of the aggregated ranking. We study this fair rank aggregation problem by considering the Kendall tau as the underlying metric. While we know of a polynomial-time approximation scheme (PTAS) for the classical rank aggregation problem, the corresponding fair variant only possesses a quite straightforward 3-approximation algorithm due to Wei et al., SIGMOD'22, and Chakraborty et al., NeurIPS'22, which finds the closest fair ranking for each input ranking and then simply outputs the best one. In this paper, we first provide a novel algorithm that achieves $(2+\epsilon)$-approximation (for any $\epsilon > 0$), significantly improving over the 3-approximation bound. Next, we provide a $2.881$-approximation fair rank aggregation algorithm that works irrespective of the fairness notion, given one can find a closest fair ranking, beating the 3-approximation bound. We complement our theoretical guarantee by performing extensive experiments on various real-world datasets to establish the effectiveness of our algorithm further by comparing it with the performance of state-of-the-art algorithms.
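
As a reminder of the metric being aggregated, here is the straightforward quadratic-time computation of the Kendall tau distance between two rankings; the fair aggregation algorithms themselves are not reproduced here.

    from itertools import combinations

    def kendall_tau_distance(ranking_a, ranking_b):
        """Number of candidate pairs ordered differently by the two rankings.

        Each ranking lists the same candidates from most to least preferred.
        """
        pos_a = {c: i for i, c in enumerate(ranking_a)}
        pos_b = {c: i for i, c in enumerate(ranking_b)}
        return sum(
            1
            for x, y in combinations(ranking_a, 2)
            if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0
        )

    # The (unconstrained) aggregation objective is to minimize the sum of these
    # distances from the consensus ranking to all input rankings.
    print(kendall_tau_distance(["a", "b", "c", "d"], ["b", "a", "d", "c"]))  # 2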

$XX^{t}$ Can Be Faster

from arXiv: Data Structures and Algorithms

Authors: Dmitry Rybin, Yushun Zhang, Zhi-Quan Luo

We present a new algorithm, RXTX, that computes the product of a matrix with its transpose, $XX^{t}$. RXTX uses $5\%$ fewer multiplications and additions than the state of the art and achieves accelerations even for small sizes of the matrix $X$. The algorithm was discovered by combining machine-learning-based search methods with combinatorial optimization.
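
RXTX itself is a recursive bilinear algorithm found by search and is not reproduced here. For reference, the sketch below is the standard symmetric baseline: it computes only the upper triangle of $XX^{t}$ and mirrors it, roughly halving the scalar work relative to a generic matrix product.

    import numpy as np

    def gram_upper(X):
        """Compute X @ X.T from the upper-triangular inner products only.

        This is the plain symmetric baseline, not the RXTX algorithm.
        """
        n = X.shape[0]
        G = np.empty((n, n), dtype=X.dtype)
        for i in range(n):
            for j in range(i, n):
                G[i, j] = X[i] @ X[j]  # inner product of rows i and j
                G[j, i] = G[i, j]      # mirror into the lower triangle
        return G

    X = np.random.default_rng(0).standard_normal((4, 5))
    print(np.allclose(gram_upper(X), X @ X.T))  # True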

High-Temperature Fermionic Gibbs States are Mixtures of Gaussian States

from arXiv: Data Structures and Algorithms

Authors: Akshar Ramkumar, Yiyi Cai, Yu Tong, Jiaqing Jiang

Efficient simulation of a quantum system generally relies on structural properties of the quantum state. Motivated by the recent results by Bakshi et al. on the sudden death of entanglement in high-temperature Gibbs states of quantum spin systems, we study the high-temperature Gibbs states of bounded-degree local fermionic Hamiltonians, which include the special case of geometrically local fermionic systems. We prove that at a sufficiently high temperature that is independent of the system size, the Gibbs state is a probabilistic mixture of fermionic Gaussian states. This forms the basis of an efficient classical algorithm to prepare the Gibbs state by sampling from a distribution of fermionic Gaussian states.

Thursday, May 15

This blog has sunsetted

from Simons Institute Blog

We no longer actively maintain this blog. Please subscribe to our newsletter!

By 956284

Linkage

from David Eppstein

Nature on National Science Foundation terminating hundreds of grants and planning for massive budget cuts (\(\mathbb{M}\)). Terence Tao writes “this is going to be hugely disruptive … and in conflict with the congressional authorization of funds appropriated for the NSF”.

By David Eppstein

Phase Transitions in Decision Problems Over Odd-Sized Alphabets

from arXiv: Computational Complexity

Authors: Andrew Jackson

In [A. Jackson, Explaining the ubiquity of phase transitions in decision problems (2025), arXiv:2501.14569], I established that phase transitions are always present in a large subset of decision problems over even-sized alphabets, explaining -- in part -- why phase transitions are seen so often in decision problems. However, decision problems over odd-sized alphabets were not discussed. Here, I correct that oversight, showing that a similar subset of decision problems over odd-sized alphabets also always exhibit phase transitions.

BusOut is NP-complete

from arXiv: Computational Complexity

Authors: Takehiro Ishibashi, Ryo Yoshinaka, Ayumi Shinohara

This study examines the computational complexity of the decision problem modeled on the smartphone game Bus Out. The objective of the game is to load all the passengers in a queue onto appropriate buses using a limited number of bus parking spots by selecting and dispatching the buses on a map. We show that the problem is NP-complete, even for highly restricted instances. We also show that it is hard to approximate the minimum number of parking spots needed to solve a given instance.

The Adaptive Complexity of Finding a Stationary Point

from arXiv: Computational Complexity

Authors: Huanjian Zhou, Andi Han, Akiko Takeda, Masashi Sugiyama

In large-scale applications, such as machine learning, it is desirable to design non-convex optimization algorithms with a high degree of parallelization. In this work, we study the adaptive complexity of finding a stationary point, which is the minimal number of sequential rounds required to achieve stationarity given polynomially many queries executed in parallel at each round. For the high-dimensional case, i.e., $d = \widetilde{\Omega}(\varepsilon^{-(2 + 2p)/p})$, we show that for any (potentially randomized) algorithm, there exists a function with Lipschitz $p$-th order derivatives such that the algorithm requires at least $\varepsilon^{-(p+1)/p}$ iterations to find an $\varepsilon$-stationary point. Our lower bounds are tight and show that even with $\mathrm{poly}(d)$ queries per iteration, no algorithm has better convergence rate than those achievable with one-query-per-round algorithms. In other words, gradient descent, the cubic-regularized Newton's method, and the $p$-th order adaptive regularization method are adaptively optimal. Our proof relies upon novel analysis with the characterization of the output for the hardness potentials based on a chain-like structure with random partition. For the constant-dimensional case, i.e., $d = \Theta(1)$, we propose an algorithm that bridges grid search and gradient flow trapping, finding an approximate stationary point in constant iterations. Its asymptotic tightness is verified by a new lower bound on the required queries per iteration. We show there exists a smooth function such that any algorithm running with $\Theta(\log (1/\varepsilon))$ rounds requires at least $\widetilde{\Omega}((1/\varepsilon)^{(d-1)/2})$ queries per round. This lower bound is tight up to a logarithmic factor, and implies that the gradient flow trapping is adaptively optimal.

Sensitivity and Hamming graphs

from arXiv: Computational Complexity

Authors: Sara Asensio, Yuval Filmus, Ignacio García-Marco, Kolja Knauer

For any $m\geq 3$ we show that the Hamming graph $H(n,m)$ admits an imbalanced partition into $m$ sets, each inducing a subgraph of low maximum degree. This improves previous results by Tandya and by Potechin and Tsang, and disproves the Strong $m$-ary Sensitivity Conjecture of Asensio, García-Marco, and Knauer. On the other hand, we prove their weaker $m$-ary Sensitivity Conjecture by showing that the sensitivity of any $m$-ary function is bounded from below by a polynomial expression in its degree.

Even Faster Algorithm for the Chamfer Distance

from arXiv: Data Structures and Algorithms

Authors: Ying Feng, Piotr Indyk

For two $d$-dimensional point sets $A, B$ of size up to $n$, the Chamfer distance from $A$ to $B$ is defined as $\mathrm{CH}(A,B) = \sum_{a \in A} \min_{b \in B} \|a-b\|$. The Chamfer distance is a widely used measure for quantifying dissimilarity between sets of points, used in many machine learning and computer vision applications. A recent work of Bakshi et al., NeurIPS'23, gave the first near-linear time $(1+\varepsilon)$-approximate algorithm, with a running time of $O(nd\log(n)/\varepsilon^2)$. In this paper we improve the running time further, to $O(nd(\log\log(n)+\log(1/\varepsilon))/\varepsilon^2)$. When $\varepsilon$ is a constant, this reduces the gap between the upper bound and the trivial $\Omega(dn)$ lower bound significantly, from $O(\log n)$ to $O(\log\log n)$.
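
For reference, here is the exact brute-force computation of the Chamfer distance as defined above; it takes time proportional to $|A|\cdot|B|\cdot d$, and the paper's $(1+\varepsilon)$-approximation is about avoiding precisely this quadratic scan.

    import numpy as np

    def chamfer(A, B):
        """CH(A, B): for each a in A, add the Euclidean distance to its nearest b in B.

        A and B are arrays of shape (|A|, d) and (|B|, d); this exact baseline
        runs in O(|A| * |B| * d) time.
        """
        diff = A[:, None, :] - B[None, :, :]  # shape (|A|, |B|, d)
        dists = np.linalg.norm(diff, axis=2)  # pairwise distances
        return dists.min(axis=1).sum()

    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 3))
    B = rng.standard_normal((120, 3))
    print(chamfer(A, B))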

A Dynamic Working Set Method for Compressed Sensing

from arXiv: Data Structures and Algorithms

Authors: Siu-Wing Cheng, Man Ting Wong

We propose a dynamic working set method (DWS) for the problem $\min_{\mathtt{x} \in \mathbb{R}^n} \frac{1}{2}\|\mathtt{Ax}-\mathtt{b}\|^2 + \eta\|\mathtt{x}\|_1$ that arises from compressed sensing. DWS manages the working set while iteratively calling a regression solver to generate progressively better solutions. Our experiments show that DWS is more efficient than other state-of-the-art software in the context of compressed sensing. Scale the space so that $\|\mathtt{b}\|=1$. Let $s$ be the number of non-zeros in the unknown signal. We prove that for any given $\varepsilon > 0$, DWS reaches a solution with an additive error $\varepsilon/\eta^2$ such that each call of the solver uses only $O(\frac{1}{\varepsilon}s\log s \log\frac{1}{\varepsilon})$ variables, and each intermediate solution has $O(\frac{1}{\varepsilon}s\log s\log\frac{1}{\varepsilon})$ non-zero coordinates.
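
The abstract does not spell out the DWS method itself; the sketch below is only a generic proximal-gradient (ISTA) solver for the stated objective $\frac{1}{2}\|\mathtt{Ax}-\mathtt{b}\|^2 + \eta\|\mathtt{x}\|_1$, i.e., the kind of regression solver that a working-set scheme could call repeatedly on a restricted set of variables.

    import numpy as np

    def ista(A, b, eta, iters=500):
        """Proximal gradient (ISTA) for 0.5*||Ax - b||^2 + eta*||x||_1.

        A generic solver for this objective, not the paper's DWS method.
        """
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            z = x - step * (A.T @ (A @ x - b))   # gradient step on the smooth part
            x = np.sign(z) * np.maximum(np.abs(z) - step * eta, 0.0)  # soft-threshold
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 200))
    x_true = np.zeros(200)
    x_true[:5] = 1.0                             # sparse unknown signal
    b = A @ x_true
    b /= np.linalg.norm(b)                       # scale so that ||b|| = 1, as in the abstract
    x_hat = ista(A, b, eta=0.01)
    print(0.5 * np.linalg.norm(A @ x_hat - b) ** 2 + 0.01 * np.abs(x_hat).sum())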

Online Bin Packing with Item Size Estimates

from arXiv: Data Structures and Algorithms

Authors: Matthias Gehnen, Andreas Usdenski

Imagine yourself moving to another place, and therefore, you need to pack all of your belongings into moving boxes with some capacity. In the classical bin packing model, you would try to minimize the number of boxes, knowing the exact size of each item you want to pack. In the online bin packing problem, you need to start packing the first item into a box, without knowing what other stuff is upcoming. Both settings are somewhat unrealistic, as you are likely not willing to measure the exact size of all your belongings before packing the first item, but you are not completely clueless about what other stuff you have when you start packing. In this article, we introduce the online bin packing with estimates model, where you start packing with a rough idea about the upcoming item sizes in mind. In this model, an algorithm receives a size estimate for every item in the input list together with an accuracy factor $\delta$ in advance. Just as for regular online bin packing, the items are then presented iteratively. The actual sizes of the items are allowed to deviate from the size estimate by a factor of $\delta$. Once the actual size of an item is revealed, the algorithm has to make an irrevocable decision on the question where to place it. This is the first time online bin packing is studied under this model. This article has three main results: First, no algorithm can achieve a competitive ratio of less than $\frac{4}{3}$, even for an arbitrarily small factor $\delta>0$. Second, we present an algorithm that is $1.5$-competitive for all $\delta \leq \frac{1}{35}$. Finally, we design a strategy that yields a competitive ratio of $\frac{4}{3}$ under the assumption that not more than two items can be placed in the same bin, which is best possible in this setting.
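
To make the model concrete, here is a tiny simulator: each item carries an estimate, its actual size (revealed only on arrival) deviates from that estimate by a factor of at most $\delta$ (under one plausible reading of the deviation model), and the policy must place it irrevocably. First-Fit is used purely as a placeholder policy here; it is not the $1.5$-competitive algorithm from the paper.

    import random

    def within_estimate(actual, estimate, delta):
        # one plausible reading of "deviates from the estimate by a factor of delta"
        return estimate / (1 + delta) <= actual <= estimate * (1 + delta)

    def first_fit_online(items, delta, capacity=1.0):
        """Pack (estimate, actual) pairs online, placing each item once its actual size is revealed.

        First-Fit is only a placeholder policy used to illustrate the model.
        """
        bins = []
        for estimate, actual in items:
            assert within_estimate(actual, estimate, delta)
            for b in bins:                      # irrevocable placement decision
                if sum(b) + actual <= capacity:
                    b.append(actual)
                    break
            else:
                bins.append([actual])
        return bins

    random.seed(0)
    delta = 1 / 35                              # the accuracy regime of the 1.5-competitive result
    estimates = [random.uniform(0.1, 0.6) for _ in range(20)]
    items = [(e, e * random.uniform(1 / (1 + delta), 1 + delta)) for e in estimates]
    print(len(first_fit_online(items, delta)))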

Structural Parameterization of Steiner Tree Packing

from arXiv: Data Structures and Algorithms

Authors: Niko Hastrich, Kirill Simonov

Steiner Tree Packing (STP) is a notoriously hard problem in classical complexity theory, which is of practical relevance to VLSI circuit design. Previous research has approached this problem by providing heuristic or approximate algorithms. In this paper, we show the first FPT algorithms for STP parameterized by structural parameters of the input graph. In particular, we show that STP is fixed-parameter tractable by the tree-cut width as well as the fracture number of the input graph. To achieve our results, we generalize techniques from Edge-Disjoint Paths (EDP) to Generalized Steiner Tree Packing (GSTP), which generalizes both STP and EDP. First, we derive the notion of the augmented graph for GSTP analogous to EDP. We then show that GSTP is FPT by (1) the tree-cut width of the augmented graph, (2) the fracture number of the augmented graph, (3) the slim tree-cut width of the input graph. The latter two results were previously known for EDP; our results generalize these to GSTP and improve the running time for the parameter fracture number. On the other hand, it was open whether EDP is FPT parameterized by the tree-cut width of the augmented graph, despite extensive research on the structural complexity of the problem. We settle this question affirmatively.

Approximate Cartesian Tree Matching with One Difference

from arXiv: Data Structures and Algorithms

Authors: Bastien Auvray, Julien David, Samah Ghazawi, Richard Groult, Gad M. Landau, Thierry Lecroq

Cartesian tree pattern matching consists of finding all the factors of a text that have the same Cartesian tree as a given pattern. Theoretical and practical solutions already exist for the exact case. In this paper, we propose the first algorithms for solving approximate Cartesian tree pattern matching with one difference, given a pattern of length $m$ and a text of length $n$. We present a generic algorithm that finds all the factors of the text that have the same Cartesian tree as the pattern up to one difference, using different notions of difference. We show that this algorithm has an $O(nM)$ worst-case complexity and that, for several random models, it has a linear average-case complexity. We also present an automaton-based algorithm, adapting [PALP19], that can be generalized to deal with more than one difference.
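
As background, the sketch below builds the Cartesian tree of a sequence with the standard stack-based linear-time construction (using the min-at-the-root convention and assuming distinct values); two factors match in the exact version of the problem when these trees coincide. The one-difference algorithms from the paper are not reproduced here.

    def cartesian_tree(seq):
        """Parent array of the (min-rooted) Cartesian tree of seq, built in O(n) with a stack.

        parent[i] is the index of the parent of position i, or -1 for the root.
        """
        parent = [-1] * len(seq)
        stack = []                        # indices on the rightmost path, values increasing
        for i, v in enumerate(seq):
            last_popped = -1
            while stack and seq[stack[-1]] > v:
                last_popped = stack.pop()
            if last_popped != -1:
                parent[last_popped] = i   # the popped subtree becomes the left child of i
            if stack:
                parent[i] = stack[-1]     # i becomes the right child of the new stack top
            stack.append(i)
        return parent

    # Two factors match exactly iff they are "shaped" the same way:
    print(cartesian_tree([3, 1, 4, 2, 5]) == cartesian_tree([30, 10, 40, 20, 50]))  # True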

Fully Dynamic Euclidean Bi-Chromatic Matching in Sublinear Update Time

from arXiv: Data Structures and Algorithms

Authors: Gramoz Goranci, Peter Kiss, Neel Patel, Martin P. Seybold, Eva Szilagyi, Da Wei Zheng

We consider the Euclidean bi-chromatic matching problem in the dynamic setting, where the goal is to efficiently process point insertions and deletions while maintaining a high-quality solution. Computing the minimum cost bi-chromatic matching is one of the core problems in geometric optimization that has found many applications, most notably in estimating the Wasserstein distance between two distributions. In this work, we present the first fully dynamic algorithm for Euclidean bi-chromatic matching with sub-linear update time. For any fixed $\varepsilon > 0$, our algorithm achieves $O(1/\varepsilon)$-approximation and handles updates in $O(n^{\varepsilon})$ time. Our experiments show that our algorithm enables effective monitoring of the distributional drift in the Wasserstein distance on real and synthetic data sets, while outperforming the runtime of baseline approximations by orders of magnitude.
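
The dynamic data structure itself is beyond an abstract-sized sketch. For comparison, the static optimum that it approximates under insertions and deletions can be computed exactly with the Hungarian method, as below; the cost of recomputing this from scratch on every update is what motivates dynamic algorithms with sublinear update time.

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def static_bichromatic_matching_cost(red, blue):
        """Exact minimum-cost perfect matching between two equal-size Euclidean point sets.

        This is the static baseline that the dynamic algorithm approximates.
        """
        cost = cdist(red, blue)                   # pairwise Euclidean distances
        rows, cols = linear_sum_assignment(cost)  # Hungarian method
        return cost[rows, cols].sum()

    rng = np.random.default_rng(0)
    red = rng.random((50, 2))
    blue = rng.random((50, 2))
    print(static_bichromatic_matching_cost(red, blue))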

Wednesday, May 14

A Bittersweet Anniversary

from Computational Complexity

The National Science Foundation was founded on May 10, 1950, 75 years ago last Saturday. No doubt the NSF has seen better days, but first let's take a look back.

At the end of World War II, Vannevar Bush, Director of the Office of Scientific Research and Development, wrote Science, The Endless Frontier, where he laid out the importance of scientific research and the need for the US government to foster that research. 

A new agency should be established, therefore, by the Congress for the purpose. Such an agency, moreover, should be an independent agency devoted to the support of scientific research and advanced scientific education alone. Industry learned many years ago that basic research cannot often be fruitfully conducted as an adjunct to or a subdivision of an operating agency or department. Operating agencies have immediate operating goals and are under constant pressure to produce in a tangible way, for that is the test of their value. None of these conditions is favorable to basic research. Research is the exploration of the unknown and is necessarily speculative. It is inhibited by conventional approaches, traditions, and standards. It cannot be satisfactorily conducted in an atmosphere where it is gauged and tested by operating or production standards. Basic scientific research should not, therefore, be placed under an operating agency whose paramount concern is anything other than research. Research will always suffer when put in competition with operations.

The report laid out a National Research Foundation whose functions would actually be spread across three agencies: DARPA, the NIH, and the NSF.

While Bush, given the time, didn't significantly mention computing, computing would become a central part of NSF's mission with the establishment of the Computing and Information Science and Engineering (CISE) directorate in 1986, placing it at the same level as the Math and Physical Sciences Directorate and the Engineering Directorate.

In 1999, the President’s Information Technology Advisory Committee (PITAC) issued a report that led to the NSF Information Technology Research (ITR) program, which became one of the largest NSF research initiatives of the early 2000s. The report helped reframe computing not just as infrastructure but as a scientific discipline in its own right, deserving of the same kind of basic science funding as physics or biology.

CISE has led many initiatives through the years; for example, the TRIPODS program established several centers devoted to the theoretical study of data science.

In recent weeks, the NSF director stepped down, hundreds of grants were canceled, new grants were put indefinitely on hold, indirect costs on new grants will be capped at 15%, and many staff members were pushed out. Divisions below the directorates are slated for elimination, advisory committees have been disbanded, and Trump's proposed budget cuts NSF's allocation by about half. The CISE AD (Assistant Director, the head of CISE), Greg Hager, stepped down last week and, through the CRA, sent a message to the community.

Under these circumstances, my ability to carry out my vision, to provide a voice for computing research, and to provide authentic leadership to the community are diminished to the point that I can have more impact outside NSF than within it. Echoing Dr. Nelson’s powerful article, leaving “allows me to speak more clearly in my own language,” and, in doing so, even more effectively amplify the work of the incredible, dedicated CISE leadership and staff who continue to strive to fulfill NSF’s mission. 

As I move beyond NSF, I will continue to make the case for computing research. Computing is central to so much in today’s world and computing advances are now core assets to the Nation’s research enterprise. NSF’s support for the past 75 years has forcefully demonstrated the value of computing research for advancing national health, prosperity and welfare; enhancing national economic competitiveness; securing the national defense and helping promote all of science and engineering. NSF-funded work has continually catalyzed new innovations, created new industries, and made us the envy of the world.  

We all need to join Greg in continuing the fight to ensure that Vannevar Bush's vision continues to survive another 75 years and beyond.

By Lance Fortnow

Workshop on Predictions and Uncertainty at COLT 25

from CS Theory Events

June 30, 2025 Lyon, France at COLT 2025 https://vaidehi8913.github.io/predictions-and-uncertainty-colt25/ Submission deadline: May 25, 2025 Predictions from machine learning systems are increasingly being used as inputs to downstream algorithms and decision-making tasks. However, these predictions can be unreliable, and often suffer from biases or overconfidence, highlighting the need for rigorous approaches to modeling their uncertainty. The … Continue reading Workshop on Predictions and Uncertainty at COLT 25

By shacharlovett

Short and useful quantum proofs for sublogarithmic-space verifiers

from arXiv: Computational Complexity

Authors: A. C. Cem Say

Quantum Merlin-Arthur proof systems are believed to be stronger than both their classical counterparts and ``stand-alone'' quantum computers when Arthur is assumed to operate in $\Omega(\log n)$ space. No hint of such an advantage over classical computation had emerged from research on smaller space bounds, which had so far concentrated on constant-space verifiers. We initiate the study of quantum Merlin-Arthur systems with space bounds in $\omega(1) \cap o(\log n)$, and exhibit a problem family $\mathcal{F}$, whose yes-instances have proofs that are verifiable by polynomial-time quantum Turing machines operating in this regime. We show that no problem in $\mathcal{F}$ has proofs that can be verified classically or is solvable by a stand-alone quantum machine in polynomial time if standard complexity assumptions hold. Unlike previous examples of small-space verifiers, our protocols require only subpolynomial-length quantum proofs.

Computing Projective Implicit Representations from Poset Towers

from arXiv: Computational Geometry

Authors: Tamal K. Dey, Florian Russold

A family of simplicial complexes, connected with simplicial maps and indexed by a poset $P$, is called a poset tower. The concept of poset towers subsumes classical objects of study in the persistence literature, as, for example, one-critical multi-filtrations and zigzag filtrations, but also allows multi-critical simplices and arbitrary simplicial maps. The homology of a poset tower gives rise to a $P$-persistence module. To compute this homology globally over $P$, in the spirit of the persistence algorithm, we consider the homology of a chain complex of $P$-persistence modules, $C_{\ell-1}\xleftarrow{}C_\ell\xleftarrow{}C_{\ell+1}$, induced by the simplices of the poset tower. Contrary to the case of one-critical filtrations, the chain-modules $C_\ell$ of a poset tower can have a complicated structure. In this work, we tackle the problem of computing a representation of such a chain complex segment by projective modules and $P$-graded matrices, which we call a projective implicit representation (PiRep). We give efficient algorithms to compute asymptotically minimal projective resolutions (up to the second term) of the chain modules and the boundary maps and compute a PiRep from these resolutions. Our algorithms are tailored to the chain complexes and resolutions coming from poset towers and take advantage of their special structure. In the context of poset towers, they are fully general and could potentially serve as a foundation for developing more efficient algorithms on specific posets.

Claycode: Stylable and Deformable 2D Scannable Codes

from arXiv: Computational Geometry

Authors: Marco Maida, Alberto Crescini, Marco Perronet, Elena Camuffo

This paper introduces Claycode, a novel 2D scannable code designed for extensive stylization and deformation. Unlike traditional matrix-based codes (e.g., QR codes), Claycodes encode their message in a tree structure. During the encoding process, bits are mapped into a topology tree, which is then depicted as a nesting of color regions drawn within the boundaries of a target polygon shape. When decoding, Claycodes are extracted and interpreted in real-time from a camera stream. We detail the end-to-end pipeline and show that Claycodes allow for extensive stylization without compromising their functionality. We then empirically demonstrate Claycode's high tolerance to heavy deformations, outperforming traditional 2D scannable codes in scenarios where they typically fail.

Implicit Toolpath Generation for Functionally Graded Additive Manufacturing via Gradient-Aware Slicing

from arXiv: Computational Geometry

Authors: Charles Wade, Devon Beck, Robert MacCurdy

This paper presents a novel gradient-aware slicing method for functionally graded additive manufacturing (FGM) that overcomes the limitations of conventional toolpath planning approaches, which struggle to produce truly continuous gradients. By integrating multi-material gradients into the toolpath generation process, our method enables the fabrication of FGMs with complex gradients that vary seamlessly along all three axes. We leverage OpenVCAD's implicit representation of geometry and material fields to directly extract iso-contours, enabling accurate, controlled gradient toolpaths. Two novel strategies are introduced to integrate these gradients into the toolpath planning process. The first strategy maintains traditional perimeter, skin, and infill structures subdivided by mixture ratios, with automated 'zippering' to mitigate stress concentrations. The second strategy fills iso-contoured regions densely, printing directly against gradients to eliminate purging and reduce waste. Both strategies accommodate gradually changing printing parameters, such as mixed filament ratios, toolhead switching, and variable nozzle temperatures for foaming materials. This capability allows for controlled variation of composition, density, and other properties within a single build, expanding the design space for functionally graded parts. Experimental results demonstrate the fabrication of high-quality FGMs with complex, multi-axis gradients, highlighting the versatility of our method. We showcase the successful implementation of both strategies on a range of geometries and material combinations, demonstrating the potential of our approach to produce intricate and functional FGMs. This work provides a robust, open-source, and automated framework for designing and fabricating advanced FGMs, accelerating research in multi-material additive manufacturing.

Uniform Universal Sets, Splitters, and Bisectors

from arXiv: Data Structures and Algorithms

Authors: Elisabet Burjons, Peter Rossmanith

Given a subset of size $k$ of a very large universe, a randomized way to find this subset could consist of deleting half of the universe and then searching the remaining part. With a probability of $2^{-k}$ one will succeed. By probability amplification, a randomized algorithm needs about $2^k$ rounds until it succeeds. We construct bisectors that derandomize this process and have size~$2^{k+o(k)}$. One application is derandomization of reductions between average case complexity classes. We also construct uniform $(n,k)$-universal sets that generalize universal sets in such a way that they are bisectors at the same time. This construction needs only linear time and produces families of asymptotically optimal size without using advanced combinatorial constructions as subroutines, which previous families did, but is based mainly on modulo functions and refined brute force search.
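
As a quick illustration of the randomized process the abstract starts from (not the derandomized bisector construction itself), the following sketch repeatedly keeps a random half of the universe until a hidden $k$-subset survives; the parameter values and function name are illustrative.

```python
import random

def rounds_until_subset_survives(universe_size, k, seed=0):
    """Repeatedly keep a uniformly random half of the universe until the
    hidden k-subset survives intact; each round succeeds with probability
    roughly 2^(-k), so about 2^k rounds are expected."""
    rng = random.Random(seed)
    universe = range(universe_size)
    hidden = set(rng.sample(universe, k))
    rounds = 0
    while True:
        rounds += 1
        kept = set(rng.sample(universe, universe_size // 2))
        if hidden <= kept:
            return rounds

print(rounds_until_subset_survives(universe_size=1000, k=8))
```

The expected number of rounds grows like $2^k$, which matches the size scale $2^{k+o(k)}$ of the bisectors constructed in the paper.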

Tensor Sketch: Fast and Scalable Polynomial Kernel Approximation

from arXiv: Data Structures and Algorithms

Authors: Ninh Pham, Rasmus Pagh

Approximation of non-linear kernels using random feature maps has become a powerful technique for scaling kernel methods to large datasets. We propose \textit{Tensor Sketch}, an efficient random feature map for approximating polynomial kernels. Given $n$ training samples in $\mathbb{R}^d$, Tensor Sketch computes low-dimensional embeddings in $\mathbb{R}^D$ in time $O(n(d+D \log D))$, making it well-suited for high-dimensional and large-scale settings. We provide theoretical guarantees on the approximation error, ensuring the fidelity of the resulting kernel function estimates. We also discuss extensions and highlight applications where Tensor Sketch serves as a central computational tool.
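
Here is a minimal NumPy sketch of the standard Tensor Sketch construction (one CountSketch per factor, combined via FFT) for the homogeneous polynomial kernel $\langle x,y\rangle^p$; the dimensions, seeds, and function names below are illustrative rather than taken from the paper.

```python
import numpy as np

def count_sketch(x, h, s, D):
    """CountSketch of x into D buckets: coordinate i goes to bucket h[i]
    with sign s[i]."""
    cs = np.zeros(D)
    np.add.at(cs, h, s * x)
    return cs

def tensor_sketch(x, hashes, signs, D):
    """Tensor Sketch of the p-fold tensor power of x: circular convolution
    of p independent CountSketches, computed via FFT."""
    prod = np.ones(D, dtype=complex)
    for h, s in zip(hashes, signs):
        prod *= np.fft.fft(count_sketch(x, h, s, D))
    return np.real(np.fft.ifft(prod))

rng = np.random.default_rng(0)
d, D, p = 50, 1024, 2                          # input dim, sketch dim, degree
hashes = [rng.integers(0, D, size=d) for _ in range(p)]
signs = [rng.choice([-1.0, 1.0], size=d) for _ in range(p)]

x, y = rng.standard_normal(d), rng.standard_normal(d)
approx = tensor_sketch(x, hashes, signs, D) @ tensor_sketch(y, hashes, signs, D)
print(approx, (x @ y) ** p)                    # estimate vs exact <x, y>^p
```

The dot product of the two sketches is an unbiased estimate of $\langle x, y\rangle^p$, and the FFT turns the circular convolution of the per-factor CountSketches into an elementwise product, which is the source of the $D \log D$ term in the running time.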

Reconfiguration of List Colourings

from arXiv: Data Structures and Algorithms

Authors: Stijn Cambie, Wouter Cames van Batenburg, Daniel W. Cranston, Jan van den Heuvel, Ross J. Kang

Given a proper (list) colouring of a graph $G$, a recolouring step changes the colour at a single vertex to another colour (in its list) that is currently unused on its neighbours, hence maintaining a proper colouring. Suppose that each vertex $v$ has its own private list $L(v)$ of allowed colours such that $|L(v)|\ge \mbox{deg}(v)+1$. We prove that if $G$ is connected and its maximum degree $\Delta$ is at least $3$, then for any two proper $L$-colourings in which at least one vertex can be recoloured, one can be transformed to the other by a sequence of $O(|V(G)|^2)$ recolouring steps. We also show that reducing the list-size of a single vertex $w$ to $\mbox{deg}(w)$ can lead to situations where the space of proper $L$-colourings is `shattered'. Our results can be interpreted as showing a sharp phase transition in the Glauber dynamics of proper $L$-colourings of graphs. This constitutes a `local' strengthening and generalisation of a result of Feghali, Johnson, and Paulusma, which considered the situation where the lists are all identical to $\{1,\ldots,\Delta+1\}$.
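
A recolouring step is easy to state in code. The helper below (names are illustrative) checks whether a single vertex may legally switch to a given colour under the list constraints described above.

```python
def can_recolour(adj, colouring, lists, v, new_colour):
    """One recolouring step: vertex v may switch to new_colour iff the colour
    lies in v's private list L(v) and is unused on all neighbours of v."""
    return (new_colour in lists[v]
            and all(colouring[u] != new_colour for u in adj[v]))

# Path 0-1-2 with lists of size deg(v)+1.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
lists = {0: {1, 2}, 1: {1, 2, 3}, 2: {1, 2}}
colouring = {0: 1, 1: 2, 2: 1}
print(can_recolour(adj, colouring, lists, 1, 3))  # True: colour 3 is free at vertex 1
print(can_recolour(adj, colouring, lists, 0, 2))  # False: neighbour 1 uses colour 2
```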

On Unbiased Low-Rank Approximation with Minimum Distortion

from arXiv: Data Structures and Algorithms

Authors: Leighton Pate Barnes, Stephen Cameron, Benjamin Howard

We describe an algorithm for sampling a low-rank random matrix $Q$ that best approximates a fixed target matrix $P\in\mathbb{C}^{n\times m}$ in the following sense: $Q$ is unbiased, i.e., $\mathbb{E}[Q] = P$; $\mathsf{rank}(Q)\leq r$; and $Q$ minimizes the expected Frobenius norm error $\mathbb{E}\|P-Q\|_F^2$. Our algorithm mirrors the solution to the efficient unbiased sparsification problem for vectors, except applied to the singular components of the matrix $P$. Optimality is proven by showing that our algorithm matches the error from an existing lower bound.
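
To make the unbiasedness requirement concrete, here is a simple importance-sampling baseline over singular components (not the paper's distortion-minimizing scheme): it outputs a matrix of rank at most $r$ whose expectation is exactly $P$, but it makes no attempt to minimize the Frobenius error. The function name and sampling distribution are illustrative.

```python
import numpy as np

def unbiased_low_rank_sample(P, r, rng):
    """Return Q with rank(Q) <= r and E[Q] = P by importance-sampling r
    singular components proportionally to their singular values (a simple
    unbiased baseline; it does not minimize E||P - Q||_F^2)."""
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    probs = s / s.sum()
    idx = rng.choice(len(s), size=r, p=probs)
    Q = np.zeros_like(P)
    for i in idx:
        Q += (s[i] / probs[i]) * np.outer(U[:, i], Vt[i])
    return Q / r

rng = np.random.default_rng(1)
P = rng.standard_normal((6, 4))
samples = [unbiased_low_rank_sample(P, r=2, rng=rng) for _ in range(20000)]
print(np.abs(np.mean(samples, axis=0) - P).max())  # small: empirical check of E[Q] = P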

Tuesday, May 13

Postdoctoral Associate at Virginia Tech (apply by June 21, 2025)

from CCI: jobs

The Computer Science Department at Virginia Tech invites applications for a postdoctoral position with Ali Vakilian, focused on the algorithmic foundations of data science and machine learning. Research areas include sublinear and streaming algorithms, learning-augmented algorithms, and learning in strategic settings. The one-year position starts in Sept 2025, with possible extension.

Website: https://careers.pageuppeople.com/968/cw/en-us/job/533141/postdoctoral-associate
Email: vakilian@vt.edu

By shacharlovett

TR25-062 | Lower Bounds from Succinct Hitting Sets | Prerona Chatterjee, Anamay Tengse

from ECCC Papers

We investigate the consequences of the existence of ``efficiently describable'' hitting sets for polynomial-sized algebraic circuits (VP), in particular, \emph{VP-succinct hitting sets}. The existence of such hitting sets is known to be equivalent to a ``natural-proofs-barrier'' towards algebraic circuit lower bounds, from the works that introduced this concept (Forbes et al. (2018), Grochow et al. (2017)). We show that the existence of VP-succinct hitting sets for VP would either imply that VP $\neq$ VNP, or yield a fairly strong lower bound against TC$^0$ circuits, assuming the Generalized Riemann Hypothesis (GRH). This result is a consequence of showing that designing efficiently describable (VP-explicit) hitting set generators for a class $\mathcal{C}$ is essentially the same as proving a separation between $\mathcal{C}$ and VPSPACE: the algebraic analogue of PSPACE. More formally, we prove an upper bound on \emph{equations} for polynomial-sized algebraic circuits (VP), in terms of VPSPACE. Using the same upper bound, we also show that even \emph{sub-polynomially explicit hitting sets} for VP --- much weaker than VP-succinct hitting sets, which are almost polylog-explicit --- would imply that either VP $\neq$ VNP or that P $\neq$ PSPACE. This motivates us to define the concept of \emph{cryptographic hitting sets}, which we believe is interesting on its own.

Workshop on the Theory of AI for Scientific Computing

from CS Theory Events

June 30, 2025 Lyon, France https://tasc-workshop.github.io Submission deadline: May 23, 2025 Registration deadline: May 23, 2025 The Theory of AI for Scientific Computing (TASC) workshop will be held as part of COLT 2025 and aims to advance the theoretical understanding of AI-augmented scientific computing, including capabilities and limitations of recent methods, principled architectures, unique data-generation … Continue reading Workshop on the Theory of AI for Scientific Computing

By shacharlovett

May you live in boring times

from Ben Recht

Louis Fein’s utopian vision of academic computer science

I can’t get enough of the computer-centric idealistic futurism from the 50s and 60s. The dreams and fears are exactly the same as those of today's techno-optimists. Don't get me wrong, we've made tons of progress since 1960. But I’m endlessly fascinated by how we tell the same stories: Promises of superintelligent machines and self-driving cars within the decade. Solemn discussions of the inevitable job loss from labor automation.

Having found a new obsession with this Louis Fein character, I was compelled to read his 1961 paper “The Computer-Related Sciences (Synnoetics) at A University in The Year 1975.” In sharp contrast to his big-dreaming contemporaries, Fein’s lofty speculative vision imagines new academic bureaucracy. It’s delightfully boring!

Published in The American Scientist, this article is Fein’s ACM pitch written to inform a broader audience of the tremendous potential of computer science departments. With such outreach in mind, Fein asks us to imagine that his paper is the address to distinguished alumni, 15 years in the future. The speaker is the University President, an accomplished historian with impeccable bona fides. In his address, the president will be waxing poetic about his wise decision to establish a new department of Synnoetics.

I feel for the poor imaginary people who signed up to suffer through that talk. They come to campus to see old friends and watch the football game, and instead get stuck with a soporific lecture on finances. The development office should have gently informed the president that this wouldn't help meet the year's fundraising targets.

Despite its boringness, two themes that weren’t in Fein’s ACM report are worth discussing. First is his notion of synnoetics. Second is a bizarre fictional tale of a battle with the head of the campus computing center.

Synnoetics is Fein’s attempt to make a more coherent definition of the field of computer sciences. He now calls his original vision—the long laundry list of topics from the ACM article— “computing-related fields.” Starting a department by just putting a bunch of people in a room together because it makes sense financially is a hard sell on campus. You need to articulate a real academic vision.

“The reason that we were not satisfied with the former name Computer Related Sciences, was that the appearance of the word "computer" was misleading; although we were acutely aware of the public relations value of this word. People ignored the qualifying word "related" and associated the name exclusively with the computer-sciences—i.e., with the design, programming, and applications of computers, which is now only a small part of the number and variety of subjects we include in Synnoetics. The other names variously used, "Cybernetics," "Information Sciences," "Communication Sciences," etc., had similarly restricted connotations.”

Fein equates Cybernetics with “theory and practice of control and communication,” which suggests he read the title of Wiener’s book, but probably not much beyond that. His description of synnoetics sounds an awful lot like cybernetics to me. For Fein, synnoetics concerns any problem of collective decision making. Such decisions can be made solely by machines, by machines and people, by people cooperating together, or even by people and other organisms. Synnoetics studies all collective enhancement of mental power. It is the study of how to solve diverse problems to attain diverse goals—an augmented, collective, instrumental rationality.

Syn is a latinized form of a Greek prefix meaning "together"; noetics is derived from the Greek, meaning "pertaining to the mind or intellect."

To illustrate the power of synnoetics, Fein’s narrator tells the tale of “the famous strike of 1970,” which was settled by one of the faculty in the synnoetics department. The professor used a computer as an automated mediator. Instead of negotiating over a piecemeal settlement of concessions and demands, both sides argued for what constraints would make them happy. This was posed as a large-scale mathematical optimization problem rapidly solved by the computer. While MIT professors were promising machines that would think and talk, Fein was imagining faster solutions of market design problems.

Though its ambitions were relatively modest compared to those of his fellow 1960s futurists, Fein's synnoetics department is an academic utopia. It is staffed by the best researchers from academia, industry, and government, with degrees in “engineering, mathematics, linguistics, political science, psychology, physics, biology, neurophysiology, and economics,” and one of the most distinguished members is “also a poet and philosopher.” The department has broken free from the publish or perish culture that plagues other fields.

“Some of our best teachers just teach and do not write. We are pleased to have them with us, as we are pleased to have those who do research and write and do not teach. Most of the faculty is engaged in some research and some teaching. I must admit that, in the past, we insisted that our faculty publish—or else! But that was in the past. As I indicated, promotions are based on the excellence of a man as a teacher or scholar.”

In an unfortunate speculative history, Fein's narrator describes how things weren't always so perfect. Establishing the department met heavy resistance from foolish academic conservatives, chief among them the director of the campus computing center.

The costs of maintaining the bulky mainframe made the director, an excellent researcher, into a poor salesman. The computing center paid for itself by charging campus researchers for machine time. You know, the AWS model. Though the director was always looking out for the bottom line of his center, he found himself with an enjoyable multidisciplinary academic role. Those who paid for computer time had interesting problems, and he saw common themes in the diverse applications. People in the center started to pursue standard solutions as basic academic research. They published papers related to the tooling itself. Despite vocal campus resistance to the notion that computing could be a field, they demanded classes for training so their students could maximally leverage the center for research.

Thus, the center started to generate papers and teach students ad hoc. The director thought this ad hoc assembly of research and training was fine. It kept the computer humming and avoided the heated academic fights over founding a department. However, since he focused on maintaining the center, he developed a myopia: he wanted to pursue only computer-oriented research topics. Otherwise, how would the center generate more revenue?

Fortunately, others on campus took matters into their own hands and wrested control of computing away from the director. They established the department with a grand vision, and the rest was a glorious history. But what happened to the director? Fein concludes,

“Before closing, I would like to acknowledge the invaluable assistance given me in the preparation of this talk by our former Computing Center Director. He is now a Professor in the Engineering Department where he spends most of his time doing research on electronic components for Synnoetic Systems.”

This dig at the end of the article is too real. This director was clearly based on a real person. In a 1979 interview, Fein names names. In describing those most vocally opposed to a computer science department, he states:

“I didn't have any contact with people outside; the people outside weren't the resisters, it was the people inside who were the resisters. George Forsythe and [John] Herriot and Ed Feigenbaum. These people were the resisters.”

The same George Forsythe! Indeed, for those who might know, here’s the first paragraph from the history of the Stanford computer science department:

“Mathematics Professor George Forsythe, Provost Fred Terman, and Associate Provost Albert Bowker conceived of a scientific “escalation” of computing at Stanford from the Computer Center function to an academic teaching and research function. George, together with mathematics professor Jack Herriot (deceased 2003), founded the Division of Computer Science within the Mathematics Department in 1961.”

The resisters became the heroes, and Fein was mostly forgotten. It’s one of the many ways Fein’s speculative future did not become our history.

Subscribe now

By Ben Recht

Quon Classical Simulation: Unifying Clifford, Matchgates and Entanglement

from arXiv: Computational Complexity

Authors: Zixuan Feng, Zhengwei Liu, Fan Lu, Ningfeng Wang

We propose a unified classical simulation framework for quantum circuits, termed Quon Classical Simulation (QCS), built upon the diagrammatic formalism of the Quon language. Central to this framework is the introduction of magic holes, a topological feature that captures the global source of computational hardness in simulating quantum systems. Unlike conventional measures, the complexity of QCS is governed by the topological entanglement entropy associated with these magic holes. We show that Clifford circuits and Matchgate circuits are free of magic holes and thus efficiently simulable within our model. To capture the interaction structure of magic holes, we define a topological tensor network representation and develop novel skein relations and reduction algorithms to simplify circuit representations. This approach significantly improves the efficiency of classical simulations and provides a unified explanation for the tractability of various known quantum circuit classes. Our work offers a new topological perspective on the classical simulability of quantum systems and topological complexity.

Cluster Vertex Deletion Problems on Cubic Graphs

from arXiv: Computational Complexity

Authors: Irena Rusu

The problems Cluster Vertex Deletion (or Cluster-VD) and its generalization $s$-Club Cluster Vertex Deletion (or $s$-Club-VD, for any integer $s \ge 1$) have been introduced with the aim of detecting highly-connected parts in complex systems. Their NP-completeness has been established for several classes of graphs, but remains open for smaller classes, including subcubic planar bipartite graphs and cubic graphs. In this paper, we show that Cluster-VD and, more generally, $s$-Club-VD are NP-complete for cubic planar bipartite graphs. We also deduce new results for the related $k$-Path Vertex Cover problem (or $k$-PVC), namely that $3$-PVC is NP-complete for cubic planar bipartite graphs, whereas $k$-PVC with $k \ge 4$ is NP-complete for subcubic planar (and bipartite, when $k$ is odd) graphs of arbitrarily large girth.
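
For reference, verifying a candidate Cluster-VD solution is straightforward, since a graph is a cluster graph exactly when it contains no induced $P_3$. The small checker below uses an illustrative adjacency-dictionary representation and tests that property after deleting a vertex set.

```python
from itertools import combinations

def is_cluster_graph(adj):
    """True iff every connected component is a clique, equivalently iff there
    is no induced P3 (a path a-v-b with a and b non-adjacent)."""
    return all(b in adj[a]
               for v, nbrs in adj.items()
               for a, b in combinations(nbrs, 2))

def is_cluster_vd_solution(adj, S):
    """Check a candidate Cluster-VD solution: deleting S leaves a cluster graph."""
    rest = {v: adj[v] - S for v in adj if v not in S}
    return is_cluster_graph(rest)

# A 4-cycle: deleting one vertex leaves a P3; deleting two opposite vertices works.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_cluster_graph(adj),
      is_cluster_vd_solution(adj, {0}),
      is_cluster_vd_solution(adj, {0, 2}))  # False False True
```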

Undecidability of Polynomial Inequalities in Subset Densities and Additive Energies

from arXiv: Computational Complexity

Authors: Yaqiao Li

Many results in extremal graph theory can be formulated as certain polynomial inequalities in graph homomorphism densities. Answering fundamental questions raised by Lovász, Szegedy and Razborov, Hatami and Norine proved that determining the validity of an arbitrary such polynomial inequality in graph homomorphism densities is undecidable. We observe that many results in additive combinatorics can also be formulated as polynomial inequalities in subset densities and their variants. Based on techniques introduced by Hatami and Norine, together with algebraic and graph constructions and Fourier analysis, we prove two analogous undecidability theorems, showing that establishing such polynomial inequalities in additive combinatorics is inherently difficult in full generality.

Value Iteration with Guessing for Markov Chains and Markov Decision Processes

from arXiv: Computational Complexity

Authors: Krishnendu Chatterjee, Mahdi JafariRaviz, Raimundo Saona, Jakub Svoboda

Two standard models for probabilistic systems are Markov chains (MCs) and Markov decision processes (MDPs). Classic objectives for such probabilistic models for control and planning problems are reachability and stochastic shortest path. The widely studied algorithmic approach for these problems is the Value Iteration (VI) algorithm which iteratively applies local updates called Bellman updates. There are many practical approaches for VI in the literature but they all require exponentially many Bellman updates for MCs in the worst case. A preprocessing step is an algorithm that is discrete, graph-theoretical, and requires linear space. An important open question is whether, after a polynomial-time preprocessing, VI can be achieved with sub-exponentially many Bellman updates. In this work, we present a new approach for VI based on guessing values. Our theoretical contributions are twofold. First, for MCs, we present an almost-linear-time preprocessing algorithm after which, along with guessing values, VI requires only subexponentially many Bellman updates. Second, we present an improved analysis of the speed of convergence of VI for MDPs. Finally, we present a practical algorithm for MDPs based on our new approach. Experimental results show that our approach provides a considerable improvement over existing VI-based approaches on several benchmark examples from the literature.
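
For context, the Bellman updates in question are the classic ones; a plain value-iteration loop for reachability probabilities in a Markov chain looks as follows. This is the standard baseline rather than the paper's guessing-based algorithm, and the example chain is illustrative.

```python
import numpy as np

def reachability_vi(P, target, max_iters=100_000, tol=1e-12):
    """Plain value iteration for reachability in a Markov chain:
    v(s) <- sum_s' P[s, s'] * v(s'), with target states pinned to 1."""
    v = np.zeros(P.shape[0])
    v[list(target)] = 1.0
    for _ in range(max_iters):
        new = P @ v                      # one round of Bellman updates
        new[list(target)] = 1.0
        if np.max(np.abs(new - v)) < tol:
            break
        v = new
    return v

# Three states: from state 0, reach state 2 (target) or get absorbed in state 1.
P = np.array([[0.0, 0.4, 0.6],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
print(reachability_vi(P, target={2}))    # [0.6, 0.0, 1.0]
```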

On Finding Randomly Planted Cliques in Arbitrary Graphs

from arXiv: Computational Complexity

Authors: Francesco Agrimonti, Marco Bressan, Tommaso d'Orsi

We study a planted clique model introduced by Feige where a complete graph of size $c\cdot n$ is planted uniformly at random in an arbitrary $n$-vertex graph. We give a simple deterministic algorithm that, in almost linear time, recovers a clique of size $(c/3)^{O(1/c)} \cdot n$ as long as the original graph has maximum degree at most $(1-p)n$ for some fixed $p>0$. The proof hinges on showing that the degrees of the final graph are correlated with the planted clique, in a way similar to (but more intricate than) the classical $G(n,\frac{1}{2})+K_{\sqrt{n}}$ planted clique model. Our algorithm suggests a separation from the worst-case model, where, assuming the Unique Games Conjecture, no polynomial algorithm can find cliques of size $\Omega(n)$ for every fixed $c>0$, even if the input graph has maximum degree $(1-p)n$. Our techniques extend beyond the planted clique model. For example, when the planted graph is a balanced biclique, we recover a balanced biclique of size larger than the best guarantees known for the worst case.
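
A toy simulation of Feige's planted model, together with a naive highest-degree greedy extraction (emphatically not the paper's algorithm), illustrates the degree correlation the proof exploits; the host graph, parameters, and helper names below are all illustrative.

```python
import random
from itertools import combinations

def plant_clique(n, host_edges, c, rng):
    """Feige's model: plant a clique on a uniformly random set of c*n vertices
    inside an arbitrary host graph given as a set of edges."""
    K = set(rng.sample(range(n), int(c * n)))
    edges = {frozenset(e) for e in host_edges} | {frozenset(e) for e in combinations(K, 2)}
    adj = {v: set() for v in range(n)}
    for u, v in map(tuple, edges):
        adj[u].add(v)
        adj[v].add(u)
    return adj, K

def greedy_clique_by_degree(adj, k):
    """Naive heuristic: scan vertices in decreasing degree order and keep those
    adjacent to everything kept so far."""
    C = []
    for v in sorted(adj, key=lambda u: len(adj[u]), reverse=True):
        if all(w in adj[v] for w in C):
            C.append(v)
        if len(C) == k:
            break
    return set(C)

rng = random.Random(0)
n, c = 400, 0.2
host = {(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < 0.05}
adj, K = plant_clique(n, host, c, rng)
found = greedy_clique_by_degree(adj, int(c * n))
print(len(found & K), "of", len(K), "planted vertices recovered")
```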

Battle Sheep is PSPACE-complete

from arXiv: Computational Complexity

Authors: Kyle Burke, Hirotaka Ono

Battle Sheep is a board game published by Blue Orange Games. With two players, it is a combinatorial game that uses normal play rules. We show that it is PSPACE-complete, even when each stack has only up to 3 tokens.

Safety Analysis in the NGAC Model

from arXiv: Computational Complexity

Authors: Brian Tan, Ewan S. D. Davies, Indrakshi Ray, Mahmoud A. Abdelgawad

We study the safety problem for the next-generation access control (NGAC) model. We show that under mild assumptions it is coNP-complete, and under further realistic assumptions we give an algorithm for the safety problem that significantly outperforms naive brute force search. We also show that real-world examples of mutually exclusive attributes lead to nearly worst-case behavior of our algorithm.

All Polyhedral Manifolds are Connected by a 2-Step Refolding

from arXiv: Computational Geometry

Authors: Lily Chung, Erik D. Demaine, Jenny Diomidova, Tonan Kamata, Jayson Lynch, Ryuhei Uehara, Hanyu Alice Zhang

We prove that, for any two polyhedral manifolds $\mathcal P, \mathcal Q$, there is a polyhedral manifold $\mathcal I$ such that $\mathcal P, \mathcal I$ share a common unfolding and $\mathcal I,\mathcal Q$ share a common unfolding. In other words, we can unfold $\mathcal P$, refold (glue) that unfolding into $\mathcal I$, unfold $\mathcal I$, and then refold into $\mathcal Q$. Furthermore, if $\mathcal P, \mathcal Q$ have no boundary and can be embedded in 3D (without self-intersection), then so does $\mathcal I$. These results generalize to $n$ given manifolds $\mathcal P_1, \mathcal P_2, \dots, \mathcal P_n$; they all have a common unfolding with the same intermediate manifold $\mathcal I$. Allowing more than two unfold/refold steps, we obtain stronger results for two special cases: for doubly covered convex planar polygons, we achieve that all intermediate polyhedra are planar; and for tree-shaped polycubes, we achieve that all intermediate polyhedra are tree-shaped polycubes.

Hand-Shadow Poser

from arXiv: Computational Geometry

Authors: Hao Xu, Yinqiao Wang, Niloy J. Mitra, Shuaicheng Liu, Pheng-Ann Heng, Chi-Wing Fu

Hand shadow art is a captivating art form, creatively using hand shadows to reproduce expressive shapes on the wall. In this work, we study an inverse problem: given a target shape, find the poses of left and right hands that together best produce a shadow resembling the input. This problem is nontrivial, since the design space of 3D hand poses is huge while being restrictive due to anatomical constraints. Also, we need to attend to the input's shape and crucial features, though the input is colorless and textureless. To meet these challenges, we design Hand-Shadow Poser, a three-stage pipeline, to decouple the anatomical constraints (by hand) and semantic constraints (by shadow shape): (i) a generative hand assignment module to explore diverse but reasonable left/right-hand shape hypotheses; (ii) a generalized hand-shadow alignment module to infer coarse hand poses with a similarity-driven strategy for selecting hypotheses; and (iii) a shadow-feature-aware refinement module to optimize the hand poses for physical plausibility and shadow feature preservation. Further, we design our pipeline to be trainable on generic public hand data, thus avoiding the need for any specialized training dataset. For method validation, we build a benchmark of 210 diverse shadow shapes of varying complexity and a comprehensive set of metrics, including a novel DINOv2-based evaluation metric. Through extensive comparisons with multiple baselines and user studies, our approach is demonstrated to effectively generate bimanual hand poses for a large variety of hand shapes for over 85% of the benchmark cases.

A WSPD, Separator and Small Tree Cover for c-packed Graphs

from arXiv: Computational Geometry

Authors: Lindsey Deryckere, Joachim Gudmundsson, André van Renssen, Yuan Sha, Sampson Wong

The $c$-packedness property, proposed in 2010, is a geometric property that captures the spatial distribution of a set of edges. Despite the recent interest in $c$-packedness, its utility has so far been limited to Fr\'echet distance problems. An open problem is whether a wider variety of algorithmic and data structure problems can be solved efficiently under the $c$-packedness assumption, and more specifically, on $c$-packed graphs. In this paper, we prove two fundamental properties of $c$-packed graphs: that there exists a linear-size well-separated pair decomposition under the graph metric, and there exists a constant size balanced separator. We then apply these fundamental properties to obtain a small tree cover for the metric space and distance oracles under the shortest path metric. In particular, we obtain a tree cover of constant size, an exact distance oracle of near-linear size and an approximate distance oracle of linear size.
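
For context, here is the standard notion the first result refers to, paraphrased by us rather than quoted from the abstract: a pair of vertex sets $(A, B)$ is $s$-well-separated under the shortest-path metric $d$ if

$$d(A, B) \;\ge\; s \cdot \max\{\operatorname{diam}(A), \operatorname{diam}(B)\},$$

and a well-separated pair decomposition is a collection of such pairs covering every pair of distinct vertices exactly once; "linear-size" refers to the number of pairs in this collection.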

Hamiltonian Locality Testing via Trotterized Postselection

from arXiv: Data Structures and Algorithms

Authors: John Kallaugher, Daniel Liang

The (tolerant) Hamiltonian locality testing problem, introduced in [Bluhm, Caro, Oufkir '24], is to determine whether a Hamiltonian $H$ is $\varepsilon_1$-close to being $k$-local (i.e. can be written as the sum of weight-$k$ Pauli operators) or $\varepsilon_2$-far from any $k$-local Hamiltonian, given access to its time evolution operator and using as little total evolution time as possible, with distance typically defined by the normalized Frobenius norm. We give the tightest known bounds for this problem, proving an $\text{O}\left(\sqrt{\frac{\varepsilon_2}{(\varepsilon_2-\varepsilon_1)^5}}\right)$ evolution time upper bound and an $\Omega\left(\frac{1}{\varepsilon_2-\varepsilon_1}\right)$ lower bound. Our algorithm does not require reverse time evolution or controlled application of the time evolution operator, although our lower bound applies to algorithms using either tool. Furthermore, we show that if we are allowed reverse time evolution, this lower bound is tight, giving a matching $\text{O}\left(\frac{1}{\varepsilon_2-\varepsilon_1}\right)$ evolution time algorithm.
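
To compare the two bounds (a remark of ours, not from the abstract): writing $\delta = \varepsilon_2 - \varepsilon_1$ for the tolerance gap, the upper bound scales as

$$\sqrt{\frac{\varepsilon_2}{\delta^5}} \;=\; \sqrt{\varepsilon_2}\,\delta^{-5/2},$$

while the lower bound is $\Omega(\delta^{-1})$, so without reverse time evolution a polynomial gap in $\delta$ remains between the known upper and lower bounds; with reverse time evolution the two bounds match at $\Theta(\delta^{-1})$.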
