Theory of Computing Report

Wednesday, September 11

TR24-140 | Almost-catalytic Computation | Sagar Bisoyi, Krishnamoorthy Dinesh, Bhabya Rai, Jayalal Sarma

from ECCC Papers

Designing algorithms for space bounded models with restoration requirements on (most of) the space used by the algorithm is an important challenge posed about the catalytic computation model introduced by Buhrman et al. (2014). Motivated by scenarios where we do not need to restore unless $w$ is "useful", we relax the restoration requirement: only when the content of the catalytic tape is $w \in A \subseteq \Sigma^*$ does the catalytic Turing machine need to restore $w$ at the end of the computation. We define $ACL(A)$ to be the class of languages that can be accepted by almost-catalytic Turing machines with respect to $A$ (which we call the catalytic set), using at most $c\log n$ work space and $n^c$ catalytic space. We prove the following for the almost-catalytic model.

- We show that if there are almost-catalytic algorithms for a problem with catalytic set $A \subseteq \Sigma^*$ and its complement respectively, then the problem can be solved by a zero-error randomized algorithm that runs in expected polynomial time. More formally, for any language $A \subseteq \Sigma^*$, $ACL(A) \cap ACL(\overline{A}) \subseteq ZPP$. In particular, when $A \in LOG$, $ACL(A) \cap ACL(\overline{A}) = CL$. This leads to newer algorithmic approaches for designing catalytic algorithms.
- Using the above, we derive that to design catalytic algorithms for a language, it suffices to design almost-catalytic algorithms where the catalytic set is the set of strings of odd weight ($PARITY$). Towards this, we consider two complexity measures of the set $A$ which are maximized for $PARITY$. One is the random projection complexity (denoted by ${\cal R}(A)$) and the other is the subcube partition complexity (denoted by ${\cal P}(A)$). We show that, for all $k \ge 1$, there exists a language $A_k \subseteq \Sigma^*$ such that $DSPACE(n^k) \subseteq ACL(A_k)$ where for every $m \ge 1$, ${\cal R}(A_k \cap \{0,1\}^m) \ge \frac{m}{4}$ and ${\cal P}(A_k \cap \{0,1\}^m)=2^{m/4}$. This is in contrast to the catalytic machine model where it is unclear if it can accept all languages in $DSPACE(\log^{1+\epsilon} n)$ for any $\epsilon > 0$.
- Improving the partition complexity of the restoration set $A$ further, we show that for all $k \ge 1$, there exists $A_k \subseteq \{0,1\}^*$ such that $DSPACE(\log^k n) \subseteq ACL(A_k)$ where for every $m \ge 1$, ${\cal R}(A_k \cap \{0,1\}^m) \ge \frac{m}{4}$ and ${\cal P}(A_k \cap \{0,1\}^m)=2^{m/4+\Omega(\log m)}$. Our main new technique for the last two items is the use of error correcting codes to design almost-catalytic algorithms.
- We also show that, even when there are more than two alphabet symbols, if the catalytic set $A$ does not use one of the alphabet symbols, then efficient almost-catalytic algorithms with $A$ as the catalytic set can be designed for any language in $PSPACE$.

TR24-139 | Distinguishing, Predicting, and Certifying: On the Long Reach of Partial Notions of Pseudorandomness | Jiatu Li, Edward Pyne, Roei Tell

from ECCC Papers

This paper revisits the study of two classical technical tools in theoretical computer science: Yao's transformation of distinguishers to next-bit predictors (FOCS 1982), and the ``reconstruction paradigm'' in pseudorandomness (e.g., as in Nisan and Wigderson, JCSS 1994). Recent works of Pyne, Raz, and Zhan (FOCS 2023) and Doron, Pyne, and Tell (STOC 2024) showed that both of these tools can be derandomized in the specific context of read-once branching programs (ROBPs), but left open the question of derandomizing them in more general settings. Our main contributions give appealing evidence that derandomization of the two tools is possible in general settings, show surprisingly strong consequences of such derandomization, and reveal several new settings where such derandomization is unconditionally possible for algorithms stronger than ROBPs (with useful consequences). Specifically:

- We show that derandomizing these tools is equivalent to general derandomization. Specifically, we show that derandomizing distinguish-to-predict transformations is equivalent to prBPP$=$prP, and that derandomized reconstruction procedures (in a more general sense that we introduce) are equivalent to prBPP$=$prZPP. These statements hold even when scaled down to weak circuit classes and to algorithms that run in super-polynomial time.
- Our main technical contributions are unconditional constructions of derandomized versions of Yao's transformation (or reductions of this task to other problems) for classes and for algorithms beyond ROBPs. Consequently, we deduce new results: a significant relaxation of the hypotheses required to derandomize the isolation lemma for logspace algorithms and deduce that NL$=$UL; and proofs that derandomization necessitates targeted PRGs in catalytic logspace (unconditionally) and in logspace (conditionally).

In addition, we introduce a natural subclass of prZPP that has been implicitly studied in recent works (Korten FOCS 2021, CCC 2022): the class of problems reducible to a problem called ``Lossy Code''. We provide a structural characterization for this class in terms of derandomized reconstruction procedures, and show that this characterization is robust to several natural variations. Lastly, we present alternative proofs for classical results in the theory of pseudorandomness (such as two-sided derandomization reducing to one-sided), relying on the notion of deterministically transforming distinguishers to predictors as the main technical tool.

Natural Proofs is Not the Barrier You Think It Is

from Computational Complexity

If there's a position where I differ from most other complexity theorists it's that I don't believe that natural proofs present a significant barrier to proving circuit results. I wrote about this before but I buried the lead and nobody noticed.

Let's review natural proofs. I'll give a very high-level description. Li-Yang Tan gives a good technical description or you could read the original Razborov-Rudich paper. A natural proof to prove lower bounds against a circuit class \(\cal C\) consists of a collection \(C_n\) of Boolean functions on \(n\) inputs such that

  1. No polynomial-size circuit family from \(\cal C\) can compute an element of \(C_n\) for large enough \(n\). 
  2. \(C_n\) is a large fraction of all the functions on \(n\) inputs.
  3. A subset of \(C_n\) is constructive--given the truth-table of a function, you can determine whether it sits in the subset in time polynomial in the length of the truth-table. Note: This is different from the usual notion of "constructive proof".
The natural proof theorem states that if all three conditions hold then you can break pseudorandom generators and one-way functions.
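
To make constructivity concrete: a constructive property is simply a test that reads the full \(2^n\)-bit truth table and runs in time polynomial in that length. Below is a minimal illustrative sketch (not a property from the Razborov-Rudich paper); the balance property it checks is large and constructive but deliberately useless against any interesting circuit class.

```python
# A hedged illustration of "constructivity" (condition 3): the test reads a
# truth table of length N = 2^n and runs in time polynomial in N. The property
# itself is chosen only for illustration.

def balanced_property(truth_table: list[int]) -> bool:
    """True if the function's truth table has Hamming weight between N/4 and
    3N/4 -- a 'large', trivially constructive (O(N)-time checkable) property,
    though useless for circuit lower bounds."""
    N = len(truth_table)
    weight = sum(truth_table)
    return N // 4 <= weight <= 3 * N // 4

# Example: the parity function on 3 inputs is perfectly balanced.
parity_tt = [bin(x).count("1") % 2 for x in range(2 ** 3)]
assert balanced_property(parity_tt)
```
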
My problem is with the third property, constructivity. I haven't seen good reasons why a proof should be constructive. When I saw Rudich give an early talk on the paper, he both had to change the definition of constructivity (allowing subsets instead of requiring an algorithm for \(C_n\) itself) and needed to give heavily modified proofs of old theorems to make them constructive. Nothing natural about it. Compare this to the often maligned relativization where most proofs in complexity relativize without any changes.
Even Razborov and Rudich acknowledge they don't have a good argument for constructivity.
We do not have any formal evidence for constructivity, but from experience it is plausible to say that we do not yet understand the mathematics of \(C_n\) outside exponential time (as a function of \(n\)) well enough to use them effectively in a combinatorial style proof.
Let's call a proof semi-natural if conditions (1) and (2) hold. If you have a semi-natural proof you get the following implication. 
Constructivity \(\Rightarrow\) One-way Functions Fail
In other words, you still get the lower bound, just with the caveat that if an algorithm exists for the property, then an algorithm exists to break a one-way function. You are not actually breaking a one-way function, just showing that recognizing the proofs would be as hard as breaking one-way functions. An algorithm begets another algorithm. You don't have to determine constructivity either way to get the lower bound.
Even if they aren't a great barrier to circuit lower bounds, natural proofs can be an interesting, if badly named, concept in their own right. For example the Carmosino-Impagliazzo-Kabanets-Kolokolova paper Learning Algorithms from Natural Proofs.
So if I don't believe in the barrier, why are circuit lower bounds hard? In recent years, we've seen the surprising power of neural nets which roughly correspond to the complexity class TC\(^0\), and we simply don't know how to prove lower bounds against powerful computational models. Blame our limited ability to understand computation, not a natural proof barrier that really isn't there.

By Lance Fortnow

How to Verify Any (Reasonable) Distribution Property: Computationally Sound Argument Systems for Distributions

from arXiv: Computational Complexity

Authors: Tal Herman, Guy Rothblum

As statistical analyses become more central to science, industry and society, there is a growing need to ensure correctness of their results. Approximate correctness can be verified by replicating the entire analysis, but can we verify without replication? Building on a recent line of work, we study proof-systems that allow a probabilistic verifier to ascertain that the results of an analysis are approximately correct, while drawing fewer samples and using less computational resources than would be needed to replicate the analysis. We focus on distribution testing problems: verifying that an unknown distribution is close to having a claimed property. Our main contribution is an interactive protocol between a verifier and an untrusted prover, which can be used to verify any distribution property that can be decided in polynomial time given a full and explicit description of the distribution. If the distribution is at statistical distance $\varepsilon$ from having the property, then the verifier rejects with high probability. This soundness property holds against any polynomial-time strategy that a cheating prover might follow, assuming the existence of collision-resistant hash functions (a standard assumption in cryptography). For distributions over a domain of size $N$, the protocol consists of $4$ messages and the communication complexity and verifier runtime are roughly $\widetilde{O}\left(\sqrt{N} / \varepsilon^2 \right)$. The verifier's sample complexity is $\widetilde{O}\left(\sqrt{N} / \varepsilon^2 \right)$, and this is optimal up to $\mathrm{polylog}(N)$ factors (for any protocol, regardless of its communication complexity). Even for simple properties, approximately deciding whether an unknown distribution has the property can require quasi-linear sample complexity and running time. For any such property, our protocol provides a quadratic speedup over replicating the analysis.

Proceedings 14th International Workshop on Non-Classical Models of Automata and Applications (NCMA 2024)

from arXiv: Computational Complexity

Authors: Florin Manea, Giovanni Pighizzini

The Fourteenth International Workshop on Non-Classical Models of Automata and Applications (NCMA 2024) was held in Göttingen, Germany, on August 12 and 13, 2024, at the historic Georg-August-Universität, organized by the Theoretical Computer Science research group of the respective university. The NCMA workshop series was established in 2009 as an annual event for researchers working on non-classical and classical models of automata, grammars or related devices. Such models are investigated both as theoretical models and as formal models for applications from various points of view. The goal of the NCMA workshop series is to exchange and develop novel ideas in order to gain deeper and interdisciplinary coverage of this particular area that may foster new insights and substantial progress.

Classification and degenerations of small minimal border rank tensors via modules

from arXiv: Computational Complexity

Authors: Jakub Jagiełła, Joachim Jelisiejew

We give a self-contained classification of $1_*$-generic minimal border rank tensors in $C^m \otimes C^m \otimes C^m$ for $m \leq 5$. Together with previous results, this gives a classification of all minimal border rank tensors in $C^m \otimes C^m \otimes C^m$ for $m \leq 5$: there are $37$ isomorphism classes. We fully describe possible degenerations among the tensors. We prove that there are no $1$-degenerate minimal border rank tensors in $C^m \otimes C^m \otimes C^m $ for $m \leq 4$.

Coordinated Motion Planning: Multi-Agent Path Finding in a Densely Packed, Bounded Domain

from arXiv: Computational Geometry

Authors: Sándor P. Fekete, Ramin Kosfeld, Peter Kramer, Jonas Neutzner, Christian Rieck, Christian Scheffer

We study Multi-Agent Path Finding for arrangements of labeled agents in the interior of a simply connected domain: Given a unique start and target position for each agent, the goal is to find a sequence of parallel, collision-free agent motions that minimizes the overall time (the makespan) until all agents have reached their respective targets. A natural case is that of a simply connected polygonal domain with axis-parallel boundaries and integer coordinates, i.e., a simple polyomino, which amounts to a simply connected union of lattice unit squares or cells. We focus on the particularly challenging setting of densely packed agents, i.e., one per cell, which strongly restricts the mobility of agents, and requires intricate coordination of motion. We provide a variety of novel results for this problem, including (1) a characterization of polyominoes in which a reconfiguration plan is guaranteed to exist; (2) a characterization of shape parameters that induce worst-case bounds on the makespan; (3) a suite of algorithms to achieve asymptotically worst-case optimal performance with respect to the achievable stretch for cases with severely limited maneuverability. This corresponds to bounding the ratio between obtained makespan and the lower bound provided by the max-min distance between the start and target position of any agent and our shape parameters. Our results extend findings by Demaine et al. (SIAM Journal on Computing, 2019) who investigated the problem for solid rectangular domains, and in the closely related field of Permutation Routing, as presented by Alpert et al. (Computational Geometry, 2022) for convex pieces of grid graphs.
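
As a concrete companion to the stretch discussion, the sketch below computes the trivial makespan lower bound referenced above: the maximum, over all agents, of the shortest-path distance inside the polyomino between its start and target cell. This only illustrates the bounded quantity, not any of the paper's algorithms; the grid encoding and cell names are made up for the example.

```python
# Hedged sketch: the trivial makespan lower bound max_i dist(start_i, target_i),
# where dist is the shortest-path distance inside the polyomino (grid graph).
from collections import deque

def bfs_dist(cells, src, dst):
    """Shortest-path distance between two cells of a polyomino,
    given as a set of (x, y) unit squares."""
    frontier, seen = deque([(src, 0)]), {src}
    while frontier:
        (x, y), d = frontier.popleft()
        if (x, y) == dst:
            return d
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in cells and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    raise ValueError("target not reachable inside the polyomino")

def makespan_lower_bound(cells, starts, targets):
    """Every collision-free schedule needs at least this many parallel steps."""
    return max(bfs_dist(cells, s, t) for s, t in zip(starts, targets))

# Toy 2x3 polyomino with two agents swapping opposite corners.
cells = {(x, y) for x in range(3) for y in range(2)}
print(makespan_lower_bound(cells, [(0, 0), (2, 1)], [(2, 1), (0, 0)]))  # 3
```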

Multi-scale Cycle Tracking in Dynamic Planar Graphs

from arXiv: Computational Geometry

Authors: Farhan Rasheed, Abrar Naseer, Emma Nilsson, Talha Bin Masood, Ingrid Hotz

This paper presents a nested tracking framework for analyzing cycles in 2D force networks within granular materials. These materials are composed of interacting particles, whose interactions are described by a force network. Understanding the cycles within these networks at various scales and their evolution under external loads is crucial, as they significantly contribute to the mechanical and kinematic properties of the system. Our approach involves computing a cycle hierarchy by partitioning the 2D domain into segments bounded by cycles in the force network. We can adapt concepts from nested tracking graphs originally developed for merge trees by leveraging the duality between this partitioning and the cycles. We demonstrate the effectiveness of our method on two force networks derived from experiments with photoelastic disks.

Harmonic Chain Barcode and Stability

from arXiv: Computational Geometry

Authors: Salman Parsa, Bei Wang

The persistence barcode is a topological descriptor of data that plays a fundamental role in topological data analysis. Given a filtration of the space of data, a persistence barcode tracks the evolution of its homological features. In this paper, we introduce a novel type of barcode, referred to as the canonical barcode of harmonic chains, or harmonic chain barcode for short, which tracks the evolution of harmonic chains. As our main result, we show that the harmonic chain barcode is stable and it captures both geometric and topological information of data. Moreover, given a filtration of a simplicial complex of size $n$ with $m$ time steps, we can compute its harmonic chain barcode in $O(m^2n^{\omega} + mn^3)$ time, where $n^\omega$ is the matrix multiplication time. Consequently, a harmonic chain barcode can be utilized in applications in which a persistence barcode is applicable, such as feature vectorization and machine learning. Our work provides strong evidence in a growing list of literature that geometric (not just topological) information can be recovered from a persistence filtration.

Adversary Resilient Learned Bloom Filters

from arXiv: Data Structures and Algorithms

Authors: Allison Bishop, Hayder Tirmazi

Creating an adversary resilient Learned Bloom Filter \cite{learnedindexstructures} with provable guarantees is an open problem \cite{reviriego1}. We define a strong adversarial model for the Learned Bloom Filter. We also construct two adversary resilient variants of the Learned Bloom Filter called the Uptown Bodega Filter and the Downtown Bodega Filter. Our adversarial model extends an existing adversarial model designed for the Classical (i.e., not ``Learned'') Bloom Filter by Naor and Yogev~\cite{moni1} and considers computationally bounded adversaries that run in probabilistic polynomial time (PPT). We show that if pseudo-random permutations exist, then a secure Learned Bloom Filter may be constructed with $\lambda$ extra bits of memory and at most one extra pseudo-random permutation in the critical path. We further show that, if pseudo-random permutations exist, then a \textit{high utility} Learned Bloom Filter may be constructed with $2\lambda$ extra bits of memory and at most one extra pseudo-random permutation in the critical path. Finally, we construct a hybrid adversarial model for the case where a fraction of the workload is chosen by an adversary. We show realistic scenarios where using the Downtown Bodega Filter gives better performance guarantees compared to alternative approaches in this hybrid model.
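
For readers unfamiliar with the underlying data structure, here is a hedged sketch of the baseline learned Bloom filter that this line of work builds on: a learned model whose negatives are re-checked in a classical backup filter, so the structure has no false negatives. This is not the Uptown or Downtown Bodega construction nor their PRP-based hardening; the model, threshold, and sizes below are placeholder assumptions.

```python
# Hedged sketch of a baseline learned Bloom filter (learned model + classical
# backup filter). The "model" is a placeholder callable returning a score.
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int, k_hashes: int):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)

    def _positions(self, item: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item: str) -> bool:
        return all(self.bits[p] for p in self._positions(item))

class LearnedBloomFilter:
    """Positives of the learned model are accepted outright; keys the model
    would miss go into a backup filter, so there are no false negatives."""
    def __init__(self, model, keys, m_bits=1 << 16, k_hashes=4, threshold=0.5):
        self.model, self.threshold = model, threshold
        self.backup = BloomFilter(m_bits, k_hashes)
        for key in keys:
            if model(key) < threshold:      # model would miss this key
                self.backup.add(key)

    def __contains__(self, item: str) -> bool:
        return self.model(item) >= self.threshold or item in self.backup

# Toy usage with a hypothetical "model": score keys ending in a digit highly.
model = lambda s: 0.9 if s[-1].isdigit() else 0.1
lbf = LearnedBloomFilter(model, ["user7", "alpha", "beta3"])
assert "alpha" in lbf and "user7" in lbf
```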

Learning Multiple Secrets in Mastermind

from arXiv: Data Structures and Algorithms

Authors: Milind Prabhu, David Woodruff

In the Generalized Mastermind problem, there is an unknown subset $H$ of the hypercube $\{0,1\}^d$ containing $n$ points. The goal is to learn $H$ by making a few queries to an oracle, which, given a point $q$ in $\{0,1\}^d$, returns the point in $H$ nearest to $q$. We give a two-round adaptive algorithm for this problem that learns $H$ while making at most $\exp(\tilde{O}(\sqrt{d \log n}))$ queries. Furthermore, we show that any $r$-round adaptive randomized algorithm that learns $H$ with constant probability must make $\exp(\Omega(d^{3^{-(r-1)}}))$ queries even when the input has $\text{poly}(d)$ points; thus, any $\text{poly}(d)$ query algorithm must necessarily use $\Omega(\log \log d)$ rounds of adaptivity. We give optimal query complexity bounds for the variant of the problem where queries are allowed to be from $\{0,1,2\}^d$. We also study a continuous variant of the problem in which $H$ is a subset of unit vectors in $\mathbb{R}^d$, and one can query unit vectors in $\mathbb{R}^d$. For this setting, we give an $O(n^{d/2})$ query deterministic algorithm to learn the hidden set of points.
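
To fix intuition for the query model, the sketch below implements the nearest-point oracle: a query $q \in \{0,1\}^d$ is answered with a point of the hidden set $H$ that is closest in Hamming distance, ties broken arbitrarily. It illustrates only the model, not the paper's two-round algorithm; the toy instance is hypothetical.

```python
# Hedged sketch of the query model only (not the paper's algorithms): the
# oracle answers a query q in {0,1}^d with a nearest point of the hidden set H.
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def make_oracle(H):
    def oracle(q):
        return min(H, key=lambda h: hamming(h, q))  # ties broken by min()
    return oracle

# Toy instance: a hidden set of 3 points in {0,1}^5, queried at a random point.
d, rng = 5, random.Random(0)
H = [tuple(rng.randint(0, 1) for _ in range(d)) for _ in range(3)]
oracle = make_oracle(H)
q = tuple(rng.randint(0, 1) for _ in range(d))
print(q, "->", oracle(q))
```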

Position Fair Mechanisms Allocating Indivisible Goods

from arXiv: Data Structures and Algorithms

Authors: Ryoga Mahara, Ryuhei Mizutani, Taihei Oki, Tomohiko Yokoyama

In the fair division problem for indivisible goods, mechanisms that output allocations satisfying fairness concepts, such as envy-freeness up to one good (EF1), have been extensively studied. These mechanisms usually require an arbitrary order of agents as input, which may cause some agents to feel unfair since the order affects the output allocations. In the context of the cake-cutting problem, Manabe and Okamoto (2012) introduced meta-envy-freeness to capture this kind of fairness, which guarantees the absence of envy compared to different orders of agents. In this paper, we introduce position envy-freeness and its relaxation, position envy-freeness up to $k$ goods (PEF$k$), for mechanisms in the fair division problem for indivisible goods, analogous to meta-envy-freeness. While neither the round-robin nor the envy-cycle mechanism is PEF1, we propose a PEF1 mechanism that always outputs an EF1 allocation. In addition, in the case of two agents, we prove that any mechanism that always returns a maximum Nash social welfare allocation is PEF1, and propose a modified adjusted winner mechanism satisfying PEF1. We further investigate the round-robin and the envy-cycle mechanisms to measure how far they are from position envy-freeness.
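
As background for the mechanisms named above, here is a hedged sketch of the classical round-robin mechanism together with a brute-force EF1 check for additive valuations; position fairness would then be probed by re-running the mechanism under permuted agent orders. This is not the paper's PEF1 mechanism, and the toy valuations are made up.

```python
# Hedged sketch of the classical round-robin mechanism plus an EF1 check for
# additive valuations. NOT the paper's PEF1 mechanism.

def round_robin(valuations, order):
    """Agents pick their favourite remaining good in the given order."""
    n, m = len(valuations), len(valuations[0])
    remaining = set(range(m))
    bundles = [[] for _ in range(n)]
    turn = 0
    while remaining:
        agent = order[turn % n]
        best = max(remaining, key=lambda g: valuations[agent][g])
        bundles[agent].append(best)
        remaining.remove(best)
        turn += 1
    return bundles

def is_ef1(valuations, bundles):
    """Envy-freeness up to one good, for additive valuations."""
    for i, vi in enumerate(valuations):
        mine = sum(vi[g] for g in bundles[i])
        for j, other in enumerate(bundles):
            if i == j or not other:
                continue
            their = sum(vi[g] for g in other)
            if mine < their - max(vi[g] for g in other):
                return False
    return True

vals = [[5, 3, 2, 1], [4, 4, 1, 1]]           # 2 agents, 4 goods
alloc = round_robin(vals, order=[0, 1])
assert is_ef1(vals, alloc)
# Position fairness asks how the output changes under order=[1, 0].
```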

Structured Downsampling for Fast, Memory-efficient Curation of Online Data Streams

from arXiv: Data Structures and Algorithms

Authors: Matthew Andres Moreno, Luis Zaman, Emily Dolson

Operations over data streams typically hinge on efficient mechanisms to aggregate or summarize history on a rolling basis. For high-volume data streams, it is critical to manage state in a manner that is fast and memory efficient -- particularly in resource-constrained or real-time contexts. Here, we address the problem of extracting a fixed-capacity, rolling subsample from a data stream. Specifically, we explore ``data stream curation'' strategies to fulfill requirements on the composition of sample time points retained. Our ``DStream'' suite of algorithms targets three temporal coverage criteria: (1) steady coverage, where retained samples should spread evenly across elapsed data stream history; (2) stretched coverage, where early data items should be proportionally favored; and (3) tilted coverage, where recent data items should be proportionally favored. For each algorithm, we prove worst-case bounds on rolling coverage quality. We focus on the more practical, application-driven case of maximizing coverage quality given a fixed memory capacity. As a core simplifying assumption, we restrict algorithm design to a single update operation: writing from the data stream to a calculated buffer site -- with data never being read back, no metadata stored (e.g., sample timestamps), and data eviction occurring only implicitly via overwrite. Drawing only on primitive, low-level operations and ensuring full, overhead-free use of available memory, this ``DStream'' framework ideally suits domains that are resource-constrained, performance-critical, and fine-grained (e.g., individual data items as small as single bits or bytes). The proposed approach supports $\mathcal{O}(1)$ data ingestion via concise bit-level operations. To further practical applications, we provide plug-and-play open-source implementations targeting both scripted and compiled application domains.
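
For intuition about the "steady coverage" criterion only, here is a naive fixed-capacity downsampler that keeps roughly evenly spaced samples by doubling its stride whenever the buffer fills. Unlike the paper's DStream algorithms, this sketch stores timestamps and reads and compacts its buffer, so it does not satisfy the write-only, metadata-free constraints described above.

```python
# Naive illustration of the steady-coverage goal: a fixed-capacity, roughly
# evenly spaced subsample maintained by doubling the sampling stride whenever
# the buffer fills. Not the paper's DStream algorithms.

class SteadySubsample:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.stride = 1          # keep every `stride`-th item
        self.buffer = []         # list of (time, item)
        self.t = 0

    def ingest(self, item):
        if self.t % self.stride == 0:
            self.buffer.append((self.t, item))
            if len(self.buffer) > self.capacity:
                # Drop every other retained sample and double the stride.
                self.buffer = self.buffer[::2]
                self.stride *= 2
        self.t += 1

sampler = SteadySubsample(capacity=8)
for t in range(100):
    sampler.ingest(f"item{t}")
print([time for time, _ in sampler.buffer])   # roughly evenly spaced times
```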

On the joint embedding property for cographs and trees

from arXiv: Data Structures and Algorithms

Authors: Daniel Carter

A family of graphs $\mathcal{F}$ is said to have the joint embedding property (JEP) if for every $G_1, G_2\in \mathcal{F}$, there is an $H\in \mathcal{F}$ that contains both $G_1$ and $G_2$ as induced subgraphs. If $\mathcal{F}$ is given by a finite set $S$ of forbidden induced subgraphs, it is known that determining if $\mathcal{F}$ has JEP is undecidable. We prove that this problem is decidable if $P_4\in S$ and generalize this result to families of rooted labeled trees under topological containment, bounded treewidth families under the graph minor relation, and bounded cliquewidth families under the induced subgraph relation.

Exploring monotonic priority queues for Dijkstra optimization

from arXiv: Data Structures and Algorithms

Authors: Jonas Costa, Lucas Castro, Rosiane de Freitas

This paper presents a comprehensive overview of monotone priority queues, focusing on their evolution and application in shortest path algorithms. Monotone priority queues are characterized by the property that their minimum key does not decrease over time, making them particularly effective for label-setting algorithms like Dijkstra's. Some key data structures within this category are explored, emphasizing those derived directly from Dial's algorithm, including variations of multi-level bucket structures and radix heaps. Theoretical complexities and practical considerations of these structures are discussed, with insights into their development and refinement provided through a historical timeline.
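
As a minimal illustration of the monotone priority queues surveyed here, the sketch below implements the bucket queue behind Dial's algorithm for Dijkstra with nonnegative integer edge weights bounded by $C$: since extracted keys never decrease, a single forward-moving cursor over the buckets suffices. Multi-level buckets and radix heaps refine this idea; the function name and toy graph are illustrative, not taken from the survey.

```python
# Minimal sketch of Dial's bucket queue, the simplest monotone priority queue:
# with nonnegative integer edge weights at most C, tentative distances live in
# buckets 0..C*(n-1), and the extraction cursor only ever moves forward.

def dijkstra_dial(adj, source, C):
    """adj: {u: [(v, w), ...]} with integer weights 0 <= w <= C."""
    n = len(adj)
    INF = C * (n - 1) + 1
    dist = {u: INF for u in adj}
    dist[source] = 0
    buckets = [[] for _ in range(INF + 1)]
    buckets[0].append(source)
    cursor = 0                          # monotone: never moves backward
    while cursor <= C * (n - 1):
        while buckets[cursor]:
            u = buckets[cursor].pop()
            if dist[u] != cursor:       # stale entry, already improved
                continue
            for v, w in adj[u]:
                if cursor + w < dist[v]:
                    dist[v] = cursor + w
                    buckets[dist[v]].append(v)
        cursor += 1
    return dist

graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra_dial(graph, "a", C=5))   # {'a': 0, 'b': 2, 'c': 3}
```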

Reconstructing semi-directed level-1 networks using few quarnets

from arXiv: Data Structures and Algorithms

Authors: Martin Frohn, Niels Holtgrefe, Leo van Iersel, Mark Jones, Steven Kelk

Semi-directed networks are partially directed graphs that model evolution where the directed edges represent reticulate evolutionary events. We present an algorithm that reconstructs a binary $n$-leaf semi-directed level-1 network in $O(n^2)$ time from its quarnets (4-leaf subnetworks). Our method assumes we have direct access to all quarnets, yet uses only an asymptotically optimal number of $O(n \log n)$ quarnets. Under group-based models of evolution with the Jukes-Cantor or Kimura 2-parameter constraints, it has been shown that only four-cycle quarnets and the splits of the other quarnets can practically be inferred with high accuracy from nucleotide sequence data. Our algorithm uses only this information, assuming the network contains no triangles. Additionally, we provide an $O(n^3)$ time algorithm that reconstructs the blobtree (or tree-of-blobs) of any binary $n$-leaf semi-directed network with unbounded level from $O(n^3)$ splits of its quarnets.

Robust Max Selection

from arXiv: Data Structures and Algorithms

Authors: Trung Dang, Zhiyi Huang

We introduce a new model to study algorithm design under unreliable information, and apply this model for the problem of finding the uncorrupted maximum element of a list containing $n$ elements, among which are $k$ corrupted elements. Under our model, algorithms can perform black-box comparison queries between any pair of elements. However, queries regarding corrupted elements may have arbitrary output. In particular, corrupted elements do not need to behave as any consistent values, and may introduce cycles in the elements' ordering. This imposes new challenges for designing correct algorithms under this setting. For example, one cannot simply output a single element, as it is impossible to distinguish elements of a list containing one corrupted and one uncorrupted element. To ensure correctness, algorithms under this setting must output a set to make sure the uncorrupted maximum element is included. We first show that any algorithm must output a set of size at least $\min\{n, 2k + 1\}$ to ensure that the uncorrupted maximum is contained in the output set. Restricted to algorithms whose output size is exactly $\min\{n, 2k + 1\}$, for deterministic algorithms, we show matching upper and lower bounds of $\Theta(nk)$ comparison queries to produce a set of elements that contains the uncorrupted maximum. On the randomized side, we propose a 2-stage algorithm that, with high probability, uses $O(n + k \operatorname{polylog} k)$ comparison queries to find such a set, almost matching the $\Omega(n)$ queries necessary for any randomized algorithm to obtain a constant probability of being correct.

OciorCOOL: Faster Byzantine Agreement and Reliable Broadcast

from arXiv: Data Structures and Algorithms

Authors: Jinyuan Chen

COOL (Chen'21) is an error-free and deterministic Byzantine agreement protocol that achieves consensus on an $\ell$-bit message with a communication complexity of $O(\max\{n\ell, n t \log t \})$ bits in four phases, given $n\geq 3t + 1$, for a network of $n$ nodes, where up to $t$ nodes may be dishonest. In this work we show that COOL can be optimized by reducing one communication round. The new protocol is called OciorCOOL. Additionally, building on OciorCOOL, we design an optimal reliable broadcast protocol that requires only six communication rounds.

Tuesday, September 10

TR24-138 | Fully Characterizing Lossy Catalytic Computation | Marten Folkertsma, Ian Mertz, Florian Speelman, Quinten Tupker

from ECCC Papers

A catalytic machine is a model of computation where a traditional space-bounded machine is augmented with an additional, significantly larger, "catalytic" tape, which, while being available as a work tape, has the caveat of being initialized with an arbitrary string, which must be preserved at the end of the computation. Despite this restriction, catalytic machines have been shown to have surprising additional power; a logspace machine with a polynomial length catalytic tape, known as catalytic logspace ($CL$), can compute problems which are believed to be impossible for $L$. A fundamental question of the model is whether the catalytic condition, of leaving the catalytic tape in its exact original configuration, is robust to minor deviations. This study was initiated by Gupta et al. (2024), who defined lossy catalytic logspace ($LCL[e]$) as a variant of $CL$ where we allow up to $e$ errors when resetting the catalytic tape. They showed that $LCL[e] = CL$ for any $e = O(1)$, which remains the frontier of our understanding. In this work we completely characterize lossy catalytic space ($LCSPACE[s,c,e]$) in terms of ordinary catalytic space ($CSPACE[s,c]$). We show that $$LCSPACE[s,c,e] = CSPACE[\Theta(s + e \log c), \Theta(c)]$$ In other words, allowing $e$ errors on a catalytic tape of length $c$ is equivalent, up to a constant stretch, to an errorless catalytic machine with an additional $e \log c$ bits of ordinary working memory. As a consequence, we show that for any $e$, $LCL[e] = CL$ implies $SPACE[e \log n] \subseteq ZPP$, thus giving a barrier to any improvement beyond $LCL[O(1)] = CL$. We also show equivalent results for non-deterministic and randomized catalytic space.

Assistant Teaching Professor – CSE at University of California – San Diego (apply by October 13, 2024)

from CCI: jobs

The University of California San Diego Computer Science and Engineering Department seeks applications for an Assistant Teaching Professor. Must have a Ph.D. in computer science and/or electrical or computer engineering and/or computing education and/or mathematics and/or bioinformatics or computational biology and/or computational social science.

Website: https://apol-recruit.ucsd.edu/JPF04041
Email: nherrera@ucsd.edu

By shacharlovett

Quantum fault-tolerance milestones dropping like atoms

from Scott Aaronson

Update: I’d been wavering—should I vote for the terrifying lunatic, ranting about trans criminal illegal aliens cooking cat meat, or for the nice woman constantly making faces as though the lunatic was completely cracking her up? But when the woman explicitly came out in favor of AI and quantum computing research … that really sealed the deal for me.


Between roughly 2001 and 2018, I was happy to do some nice things in quantum computing theory, from the quantum lower bound for the collision problem to the invention of shadow tomography.  I hope that’s not the end of it.  QC research brought me about as much pleasure as anything in life did.  So I hope my tired brain can be revved up a few more times, between now and whenever advances in AI or my failing health or the collapse of civilization makes the issue moot. If not, though, there are still many other quantum activities to fill my days: teaching (to which I’ve returned after two years), advising my students and postdocs, popular writing and podcasts and consulting, and of course, learning about the latest advances in quantum computing so I can share them with you, my loyal readers.

On that note, what a time it is in QC!  Basically, one experimental milestone after another that people have talked about since the 90s is finally being achieved, to the point where it’s become hard to keep up with it all. Briefly though:

A couple weeks ago, the Google group announced an experiment that achieved net gain from the use of Kitaev’s surface code, using 101 physical qubits to encode 1 logical qubit. The headline result here is that, in line with theory, they see the performance improve as they pass to larger codes with more physical qubits and higher distance. Their best demonstrated code has a distance of 7, which is enough to get “beyond break-even” (their logical qubit lasts more than twice as long as the underlying physical qubits), and is also enough that any future improvements to the hardware will get amplified a lot. With superconducting qubits, one is (alas) still limited by how many one can cram onto a single chip. On paper, though, they say that scaling the same setup to a distance-27 code with ~1500 physical qubits would get them down to an error rate of 10^-6, good enough to be a building block in a future fault-tolerant QC. They also report correlated bursts of errors that come about once per hour, from a still-unknown source that appears not to be cosmic rays. I hope it’s not Gil Kalai in the next room.
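
To put the scaling claim in perspective, here is a back-of-the-envelope sketch. It assumes the usual surface-code picture in which the logical error rate is suppressed by a roughly constant factor Λ each time the code distance grows by 2; the specific numbers below (Λ and the distance-7 rate) are illustrative placeholders, not figures from the announcement.

```python
# Back-of-the-envelope sketch of surface-code error suppression.
# Assumption: the logical error rate drops by a constant factor LAMBDA
# each time the code distance increases by 2 (illustrative numbers only).

LAMBDA = 2.0      # assumed suppression factor per distance step of 2
P_AT_D7 = 1e-3    # assumed logical error rate per cycle at distance 7

def logical_error_rate(d, p_ref=P_AT_D7, d_ref=7, lam=LAMBDA):
    """Extrapolate the logical error rate at distance d under the model above."""
    return p_ref / lam ** ((d - d_ref) / 2)

for d in (7, 11, 17, 27):
    print(f"distance {d:2d}: ~{logical_error_rate(d):.1e} per cycle")
# With these placeholder numbers, distance 27 lands in the 1e-6 ballpark
# mentioned above.
```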

Separately, just this morning, Microsoft and Quantinuum announced that they entangled 12 logical qubits on a 56-physical-qubit trapped-ion processor, building on earlier work that I blogged about in April. They did this by applying a depth-3 logical circuit with 12 logical CNOT gates, to prepare a cat state. They report a 0.2% error rate when they do this, which is 11x better than they would’ve gotten without using error-correction. (Craig Gidney, in the comments, says that these results still involve postselection.)

The Microsoft/Quantinuum group also did what they called a “chemistry simulation” involving 13 physical qubits. The latter involved “only” 2 logical qubits and 4 logical gates, but 3 of those gates were non-Clifford, which are the hard kind when one is doing error-correction using a transversal code. (CNOT, by contrast, is a Clifford gate.)

Apart from the fact that Google is using superconducting qubits while Microsoft/Quantinuum are using trapped ions, the two results are incomparable in terms of what they demonstrate. Google is just scaling up a single logical qubit, but showing (crucially) that their error rate decreases with increasing size and distance. Microsoft and Quantinuum are sticking with “small” logical qubits with insufficient distance, but they’re showing that they can apply logical circuits that entangle up to 12 of these qubits.

Microsoft also announced today a new collaboration with the startup company Atom Computing, headquartered near Quantinuum in Colorado, which is trying to build neutral-atom QCs (like QuEra in Boston). Over the past few years, Microsoft’s quantum group has decisively switched from a strategy of “topological qubits or bust” to a strategy of “anything that works,” although they assure me that they also remain committed to the topological approach.

Anyway, happy to hear in the comments from anyone who knows more details, or wants to correct me on any particular, or has questions which I or others can try our best to answer.

Let me end by sticking my neck out. If hardware progress continues at the rate we’ve seen for the past year or two, then I find it hard to understand why we won’t have useful fault-tolerant QCs within the next decade. (And now to retract my neck a bit: the “if” clause in that sentence is important and non-removable!)

By Scott

Functional modeling

from Ben Recht

The inevitability of convexity in local search

This is the live blog of Lecture 4 of my graduate class “Convex Optimization.” A Table of Contents is here.

Though I made a big deal last week about how we could reduce all optimization problems to minimizing a linear cost subject to constraints, most people think of cost functions as the central part of optimization. Even though it’s easier to build models by specifying constraints, the act of optimization is easier to conceptualize using functions. Optimization implies a notion of pricing and the existence of a solution of minimal price. Moreover, if I can formulate an optimization problem as finding some point that achieves minimum cost, I can search for such a point by finding clever ways to reduce the cost at each step.

Since Newton, mathematicians have realized that dynamic search methods can find solutions of minimum value. The most popular search method, usually attributed to Cauchy, is the method of steepest descent. This very greedy method tells you to find the direction that maximally decreases your cost and follow that direction.

Steepest descent is something you learn in calculus. The steepest descent direction at any point is in the direction of the negative gradient. If the gradient is zero, there is no local information about how to make improvements. However, there are weird functions even in 1D, like x^3, where the gradient equals zero at the point x=0, but any finite step in the negative direction reduces the cost. Convex functions are the ones for which Cauchy’s method always finds the minimum cost solution. And the proof almost follows from the definition.
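
As a small aside (a minimal sketch, not part of the lecture), here is plain gradient descent run on the convex quadratic (x - 3)^2, where it converges to the global minimizer, and then started exactly at the stationary point x = 0 of x^3, where the zero gradient gives the method no information even though moving left would reduce the cost.

```python
# Minimal steepest descent sketch (fixed step size, for illustration only).
def gradient_descent(grad, x0, step=0.1, iters=200):
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)  # move against the gradient
    return x

# Convex example: f(x) = (x - 3)^2 has gradient 2(x - 3); descent finds x* = 3.
print(gradient_descent(lambda x: 2 * (x - 3), x0=-10.0))  # ~3.0

# Non-convex example: f(x) = x^3 has gradient 3x^2.
# Started at the stationary point x = 0, the gradient is zero, so the
# iterate never moves, even though any step to the left lowers the cost.
print(gradient_descent(lambda x: 3 * x ** 2, x0=0.0))  # stays at 0.0
```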

A function is convex if and only if the plane tangent to the graph of the function lies below the function at every point:

Writing this in math says that for all x and all v,

f(x + v) ≥ f(x) + ∇f(x)^T v.

So if the gradient is equal to zero at x0, that means f(x) ≥ f(x0) for all x. If you get your definitions right, all theorems are trivial.
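
As a quick numerical sanity check (a sketch, not part of the lecture), the inequality above can be verified at random points and directions for a convex function such as f(x) = ||x||^2.

```python
import numpy as np

# Numerically check the first-order convexity condition
#     f(x + v) >= f(x) + <grad f(x), v>
# for the convex function f(x) = ||x||^2, whose gradient is 2x.
rng = np.random.default_rng(0)

def f(x):
    return float(x @ x)

def grad_f(x):
    return 2 * x

for _ in range(1000):
    x = rng.normal(size=5)
    v = rng.normal(size=5)
    assert f(x + v) >= f(x) + grad_f(x) @ v - 1e-9  # holds up to round-off

print("first-order condition held at 1000 random (x, v) pairs")
```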

Last week, we needed to talk about feasible cones and separating hyperplanes to test optimality. But the condition here for convex functions is so much simpler. A point is a global minimizer of a cost function if the gradient of the function at that point is equal to zero. 

If you can model your cost with convex functions, a simple algorithm always finds the optimal solution. The only question that remains is whether you can model your problem with convex costs. Just like last week, the answer will be to have a brute force definition that is sometimes easy to check and a bunch of rules for building complex convex functions from simple ones.

For this modeling program, we should start with the more common definition of convex functions: a function is convex if, for any two points, the line segment connecting their function values lies above the graph of the function. That is, a convex function is one where the area above the graph is a convex set. Here’s the simplest picture.

Convex functions are all boring like this. Their boringness is their appeal.

Even in high dimensions, convex functions are roughly valley-shaped, perhaps with some sharper edges. But the functions you can prove convex are wild. Feel free to skim these examples, but here are some of the more exotic ones I’ve seen (oddly, all of them introduced to me by Joel Tropp).

  • If H is Hermitian and X is positive definite, f(X) = -trace(exp(H + log(X))) is convex. Here exp and log are the matrix exponential and logarithm.

  • If X and Z are positive definite, f(X,Z) = trace( X (log X - log Z) - (X - Z)) is convex.

  • If f is any complex differentiable function on the strip of complex numbers with real parts between a and b, then g(x) = sup_y |f(x+iy)| is convex on the interval [a,b].

Wut. Verifying the convexity of these particular examples is a real pain. The first one even has a name associated with it (Lieb’s Theorem). But for those of us who just want to model, it’s better to take the building block approach, again looking at operations that preserve convexity and using these to build up examples. 
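
One cheap (and, to be clear, non-rigorous) way to build confidence in claims like these is to spot-check midpoint convexity on random inputs. Here is a sketch for the first example, f(X) = -trace(exp(H + log X)), using SciPy's matrix exponential and logarithm; the random positive definite matrices are just illustrative.

```python
import numpy as np
from scipy.linalg import expm, logm

# Spot-check midpoint convexity of f(X) = -trace(exp(H + log X)) for a fixed
# Hermitian H and random positive definite X1, X2:
#     f((X1 + X2) / 2) <= (f(X1) + f(X2)) / 2.
rng = np.random.default_rng(1)
n = 4

def random_pd(n):
    """A random symmetric positive definite matrix (illustrative helper)."""
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

H = rng.normal(size=(n, n))
H = (H + H.T) / 2  # real symmetric, hence Hermitian

def f(X):
    return -np.trace(expm(H + logm(X))).real

for _ in range(100):
    X1, X2 = random_pd(n), random_pd(n)
    assert f((X1 + X2) / 2) <= (f(X1) + f(X2)) / 2 + 1e-8

print("midpoint convexity held on 100 random instances")
```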

Nonnegative Combinations. Any linear combination of convex functions with nonnegative coefficients is convex. You can prove this using the fact that inequalities are preserved under nonnegative combinations. This means that integrals of convex functions are convex, as are expected values.

Composition with Affine Functions. If f(x) is convex, A is a matrix, and b is a vector, then g(x) = f(Ax+b) is convex.

Maximization. The pointwise maximum of several convex functions is itself a convex function. In mathematics: if f_1, f_2, …, f_k are convex, then g(x) = max_i f_i(x) is convex.

Partial minimization. If f(x,z) is jointly convex in (x, z), then g(x) = inf_z f(x,z) is convex in x.

Composition with scalar functions. If f is convex, and g is convex and nondecreasing, then h(x) = g(f(x)) is convex.
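
Disciplined convex programming tools apply exactly these composition rules mechanically. As a sketch (assuming CVXPY is installed; the particular expressions are just examples), you can ask the library which curvature it derives for a composed expression:

```python
import cvxpy as cp
import numpy as np

# Let a disciplined-convex-programming system apply the composition rules.
x = cp.Variable(5)
A = np.random.randn(3, 5)
b = np.random.randn(3)

exprs = {
    "affine composition": cp.sum_squares(A @ x - b),          # convex composed with affine
    "nonnegative combo":  2 * cp.norm(x, 1) + 3 * cp.norm(x, 2),
    "pointwise max":      cp.maximum(cp.norm(x, 1), cp.sum_squares(x)),
    "scalar composition": cp.exp(cp.norm(x, 2)),               # nondecreasing convex of a convex
}

for name, e in exprs.items():
    print(f"{name:20s} curvature = {e.curvature}, DCP = {e.is_dcp()}")
# Each expression is certified CONVEX purely from the rules above.
```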

With these simple rules, you can build up a giant library of convex functions. You could argue the only ones you really need are affine functions a^T x + b. If you just think about this geometrically, a convex function is equal to the pointwise maximum of all of the affine functions that lie below its graph. This is the same as saying that a convex set is the intersection of all halfspaces that contain it.

But such sophisticated mathematical abstractions aren’t necessary for modeling. Instead, we just need the composition rules. We can get some outlandish functions with the building blocks alone:

  • The sum of the squared residual errors in a linear model

  • The sum of the k largest components of a vector

  • The negative geometric mean of a bunch of concave functions

  • The Kullback-Leibler divergence between two probability distributions

  • The maximum eigenvalue of a positive semidefinite matrix

All of these functions are convex and can be proven so using the simple composition rules. And that means that they can be minimized by local search. Disciplined modeling guarantees efficient ways to find global minimizers. The only question (and it’s a hard one) is whether you can find a good model for your problem only using these rules.
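
Here is what that looks like in practice with a disciplined modeling library (a sketch assuming CVXPY and its bundled solvers; the data is random and purely illustrative). Several of the functions listed above are available directly as atoms and can be combined and minimized:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

x = cp.Variable(5)

# Sum of squared residuals of a linear model plus the sum of the 3 largest
# components of x -- both convex by the composition rules.
objective = cp.Minimize(cp.sum_squares(A @ x - b) + cp.sum_largest(x, 3))
problem = cp.Problem(objective, [cp.norm(x, "inf") <= 1])
problem.solve()
print("optimal value:", problem.value)
print("optimal x:", np.round(x.value, 3))

# The maximum eigenvalue of a symmetric matrix is another convex atom.
X = cp.Variable((4, 4), symmetric=True)
eig_problem = cp.Problem(cp.Minimize(cp.lambda_max(X)), [cp.trace(X) == 1])
eig_problem.solve()
print("min of lambda_max with trace 1:", eig_problem.value)  # ~0.25
```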

By Ben Recht

Fully Characterizing Lossy Catalytic Computation

from arXiv: Computational Complexity

Authors: Marten Folkertsma, Ian Mertz, Florian Speelman, Quinten Tupker

A catalytic machine is a model of computation where a traditional space-bounded machine is augmented with an additional, significantly larger, "catalytic" tape, which, while being available as a work tape, has the caveat of being initialized with an arbitrary string, which must be preserved at the end of the computation. Despite this restriction, catalytic machines have been shown to have surprising additional power; a logspace machine with a polynomial length catalytic tape, known as catalytic logspace ($CL$), can compute problems which are believed to be impossible for $L$. A fundamental question of the model is whether the catalytic condition, of leaving the catalytic tape in its exact original configuration, is robust to minor deviations. This study was initialized by Gupta et al. (2024), who defined lossy catalytic logspace ($LCL[e]$) as a variant of $CL$ where we allow up to $e$ errors when resetting the catalytic tape. They showed that $LCL[e] = CL$ for any $e = O(1)$, which remains the frontier of our understanding. In this work we completely characterize lossy catalytic space ($LCSPACE[s,c,e]$) in terms of ordinary catalytic space ($CSPACE[s,c]$). We show that $$LCSPACE[s,c,e] = CSPACE[\Theta(s + e \log c), \Theta(c)]$$ In other words, allowing $e$ errors on a catalytic tape of length $c$ is equivalent, up to a constant stretch, to an equivalent errorless catalytic machine with an additional $e \log c$ bits of ordinary working memory. As a consequence, we show that for any $e$, $LCL[e] = CL$ implies $SPACE[e \log n] \subseteq ZPP$, thus giving a barrier to any improvement beyond $LCL[O(1)] = CL$. We also show equivalent results for non-deterministic and randomized catalytic space.

Nearest Neighbor CCP-Based Molecular Sequence Analysis

from arXiv: Computational Complexity

Authors: Sarwan Ali, Prakash Chourasia, Bipin Koirala, Murray Patterson

Molecular sequence analysis is crucial for comprehending several biological processes, including protein-protein interactions, functional annotation, and disease classification. The large number of sequences and the inherently complicated nature of protein structures make it challenging to analyze such data. Finding patterns and enhancing subsequent research requires the use of dimensionality reduction and feature selection approaches. Recently, a method called Correlated Clustering and Projection (CCP) has been proposed as an effective method for biological sequencing data. The CCP technique is still costly to compute even though it is effective for sequence visualization. Furthermore, its utility for classifying molecular sequences is still uncertain. To solve these two problems, we present a Nearest Neighbor Correlated Clustering and Projection (CCP-NN)-based technique for efficiently preprocessing molecular sequence data. To group related molecular sequences and produce representative supersequences, CCP makes use of sequence-to-sequence correlations. As opposed to conventional methods, CCP doesn't rely on matrix diagonalization, therefore it can be applied to a range of machine-learning problems. We estimate the density map and compute the correlation using a nearest-neighbor search technique. We performed molecular sequence classification using CCP and CCP-NN representations to assess the efficacy of our proposed approach. Our findings show that CCP-NN considerably improves classification task accuracy as well as significantly outperforms CCP in terms of computational runtime.

A Quantum Pigeonhole Principle and Two Semidefinite Relaxations of Communication Complexity

from arXiv: Computational Complexity

Authors: Pavel Dvořák, Bruno Loff, Suhail Sherif

We study semidefinite relaxations of $\Pi_1$ combinatorial statements. By relaxing the pigeonhole principle, we obtain a new "quantum" pigeonhole principle which is a stronger statement. By relaxing statements of the form "the communication complexity of $f$ is $> k$", we obtain new communication models, which we call "$\gamma_2$ communication" and "quantum-lab protocols". We prove, via an argument from proof complexity, that any natural model obtained by such a relaxation must solve all Karchmer--Wigderson games efficiently. However, the argument is not constructive, so we work to explicitly construct such protocols in these two models.

Two-Sided Lossless Expanders in the Unbalanced Setting

from arXiv: Computational Complexity

Authors: Eshan Chattopadhyay, Mohit Gurumukhani, Noam Ringach, Yunya Zhao

We present the first explicit construction of two-sided lossless expanders in the unbalanced setting (bipartite graphs that have many more nodes on the left than on the right). Prior to our work, all known explicit constructions in the unbalanced setting achieved only one-sided lossless expansion. Specifically, we show that the one-sided lossless expanders constructed by Kalev and Ta-Shma (RANDOM'22) -- that are based on multiplicity codes introduced by Kopparty, Saraf, and Yekhanin (STOC'11) -- are, in fact, two-sided lossless expanders. Using our unbalanced bipartite expander, we easily obtain lossless (non-bipartite) expander graphs with high degree and a free group action. As far as we know, this is the first explicit construction of lossless (non-bipartite) expanders with $N$ vertices and degree $\ll N$.

HyperSteiner: Computing Heuristic Hyperbolic Steiner Minimal Trees

from arXiv: Computational Geometry

Authors: Alejandro García-Castellanos, Aniss Aiman Medbouhi, Giovanni Luca Marchetti, Erik J. Bekkers, Danica Kragic

We propose HyperSteiner -- an efficient heuristic algorithm for computing Steiner minimal trees in the hyperbolic space. HyperSteiner extends the Euclidean Smith-Lee-Liebman algorithm, which is grounded in a divide-and-conquer approach involving the Delaunay triangulation. The central idea is rephrasing Steiner tree problems with three terminals as a system of equations in the Klein-Beltrami model. Motivated by the fact that hyperbolic geometry is well-suited for representing hierarchies, we explore applications to hierarchy discovery in data. Results show that HyperSteiner infers more realistic hierarchies than the Minimum Spanning Tree and is more scalable to large datasets than Neighbor Joining.

Revisiting Accurate Geometry for Morse-Smale Complexes

from arXiv: Computational Geometry

Authors: Son Le Thanh, Michael Ankele, Tino Weinkauf

The Morse-Smale complex is a standard tool in visual data analysis. The classic definition is based on a continuous view of the gradient of a scalar function where its zeros are the critical points. These points are connected via gradient curves and surfaces emanating from saddle points, known as separatrices. In a discrete setting, the Morse-Smale complex is commonly extracted by constructing a combinatorial gradient assuming the steepest descent direction. Previous works have shown that this method results in a geometric embedding of the separatrices that can be fundamentally different from those in the continuous case. To achieve a similar embedding, different approaches for constructing a combinatorial gradient were proposed. In this paper, we show that these approaches generate a different topology, i.e., the connectivity between critical points changes. Additionally, we demonstrate that the steepest descent method can compute topologically and geometrically accurate Morse-Smale complexes when applied to certain types of grids. Based on these observations, we suggest a method to attain both geometric and topological accuracy for the Morse-Smale complex of data sampled on a uniform grid.

COVID19-CBABM: A City-Based Agent Based Disease Spread Modeling Framework

from arXiv: Computational Geometry

Authors: Raunak Sarbajna, Karima Elgarroussi, Hoang D Vo, Jianyuan Ni, Christoph F. Eick

In response to the ongoing pandemic and health emergency of COVID-19, several models have been used to understand the dynamics of virus spread. Some employ mathematical models like the compartmental SEIHRD approach and others rely on agent-based modeling (ABM). In this paper, a new city-based agent-based modeling approach called COVID19-CBABM is introduced. It considers not only the transmission mechanism simulated by the SEHIRD compartments but also models people movements and their interactions with their surroundings, particularly their interactions at different types of Points of Interest (POI), such as supermarkets. Through the development of knowledge extraction procedures for Safegraph data, our approach simulates realistic conditions based on spatial patterns and infection conditions considering locations where people spend their time in a given city. Our model was implemented in Python using the Mesa-Geo framework. COVID19-CBABM is portable and can be easily extended by adding more complicated scenarios. Therefore, it is a useful tool to assist the government and health authorities in evaluating strategic decisions and actions efficiently against this epidemic, using the unique mobility patterns of each city.

Efficiently Learning Markov Random Fields from Dynamics

from arXiv: Data Structures and Algorithms

Authors: Jason Gaitonde, Ankur Moitra, Elchanan Mossel

An important task in high-dimensional statistics is learning the parameters or dependency structure of an undirected graphical model, or Markov random field (MRF). Much of the prior work on this problem assumes access to i.i.d. samples from the MRF distribution and state-of-the-art algorithms succeed using $n^{\Theta(k)}$ runtime, where $n$ is the dimension and $k$ is the order of the interactions. However, well-known reductions from the sparse parity with noise problem imply that given i.i.d. samples from a sparse, order-$k$ MRF, any learning algorithm likely requires $n^{\Omega(k)}$ time, impeding the potential for significant computational improvements. In this work, we demonstrate that these fundamental barriers for learning MRFs can surprisingly be completely circumvented when learning from natural, dynamical samples. We show that in bounded-degree MRFs, the dependency structure and parameters can be recovered using a trajectory of Glauber dynamics of length $O(n \log n)$ with runtime $O(n^2 \log n)$. The implicit constants depend only on the degree and non-degeneracy parameters of the model, but not the dimension $n$. In particular, learning MRFs from dynamics is $\textit{provably computationally easier}$ than learning from i.i.d. samples under standard hardness assumptions.

A Performance Bound for the Greedy Algorithm in a Generalized Class of String Optimization Problems

from arXiv: Data Structures and Algorithms

Authors: Brandon Van Over, Bowen Li, Edwin K. P. Chong, Ali Pezeshki

We present a simple performance bound for the greedy scheme in string optimization problems that obtains strong results. Our approach vastly generalizes the group of previously established greedy curvature bounds by Conforti and Cornu\'{e}jols (1984). We consider three constants, $\alpha_G$, $\alpha_G'$, and $\alpha_G''$ introduced by Conforti and Cornu\'{e}jols (1984), that are used in performance bounds of greedy schemes in submodular set optimization. We first generalize both of the $\alpha_G$ and $\alpha_G''$ bounds to string optimization problems in a manner that includes maximizing submodular set functions over matroids as a special case. We then derive a much simpler and computable bound that allows for applications to a far more general class of functions with string domains. We prove that our bound is superior to both the $\alpha_G$ and $\alpha_G''$ bounds and provide a counterexample to show that the $\alpha_G'$ bound is incorrect under the assumptions in Conforti and Cornu\'{e}jols (1984). We conclude with two applications. The first is an application of our result to sensor coverage problems. We demonstrate our performance bound in cases where the objective function is set submodular and string submodular. The second is an application to a social welfare maximization problem with black-box utility functions.

FPT approximations for Capacitated Sum of Radii and Diameters

from arXiv: Data Structures and Algorithms

Authors: Arnold Filtser, Ameet Gadekar

The Capacitated Sum of Radii problem involves partitioning a set of points $P$, where each point $p\in P$ has capacity $U_p$, into $k$ clusters that minimize the sum of cluster radii, such that the number of points in the cluster centered at point $p$ is at most $U_p$. We begin by showing that the problem is APX-hard, and that under gap-ETH there is no parameterized approximation scheme (FPT-AS). We then construct a $\approx5.83$-approximation algorithm in FPT time (improving a previous $\approx7.61$ approximation in FPT time). Our results also hold when the objective is a general monotone symmetric norm of radii. We also improve the approximation factors for the uniform capacity case, and for the closely related problem of Capacitated Sum of Diameters.

Centralized Selection with Preferences in the Presence of Biases

from arXiv: Data Structures and Algorithms

Authors: L. Elisa Celis, Amit Kumar, Nisheeth K. Vishnoi, Andrew Xu

This paper considers the scenario in which there are multiple institutions, each with a limited capacity for candidates, and candidates, each with preferences over the institutions. A central entity evaluates the utility of each candidate to the institutions, and the goal is to select candidates for each institution in a way that maximizes utility while also considering the candidates' preferences. The paper focuses on the setting in which candidates are divided into multiple groups and the observed utilities of candidates in some groups are biased--systematically lower than their true utilities. The first result is that, in these biased settings, prior algorithms can lead to selections with sub-optimal true utility and significant discrepancies in the fraction of candidates from each group that get their preferred choices. Subsequently, an algorithm is presented along with proof that it produces selections that achieve near-optimal group fairness with respect to preferences while also nearly maximizing the true utility under distributional assumptions. Further, extensive empirical validation of these results in real-world and synthetic settings, in which the distributional assumptions may not hold, are presented.

A new approach to bipartite stable matching optimization

from arXiv: Data Structures and Algorithms

Authors: Tamás Fleiner, András Frank, Tamás Király

As a common generalization of previously solved optimization problems concerning bipartite stable matchings, we describe a strongly polynomial network flow based algorithm for computing $\ell$ disjoint stable matchings with minimum total cost. The major observation behind the approach is that stable matchings, as edge sets, can be represented as certain cuts of an associated directed graph. This allows us to use results on disjoint cuts directly to answer questions about disjoint stable matchings. We also provide a construction that represents stable matchings as maximum-size antichains in a partially ordered set (poset), which enables us to apply the theorems of Dilworth, Mirsky, Greene and Kleitman directly to stable matchings. Another consequence of these approaches is a min-max formula for the minimum number of stable matchings covering all stable edges.

Adjacency Labeling Schemes for Small Classes

from arXiv: Data Structures and Algorithms

Authors: Édouard Bonnet, Julien Duron, John Sylvester, Viktor Zamaraev

A graph class admits an implicit representation if, for every positive integer $n$, its $n$-vertex graphs have a $O(\log n)$-bit (adjacency) labeling scheme, i.e., their vertices can be labeled by binary strings of length $O(\log n)$ such that the presence of an edge between any pair of vertices can be deduced solely from their labels. The famous Implicit Graph Conjecture posited that every hereditary (i.e., closed under taking induced subgraphs) factorial (i.e., containing $2^{O(n \log n)}$ $n$-vertex graphs) class admits an implicit representation. The conjecture was recently refuted [Hatami and Hatami, FOCS '22], and does not even hold among monotone (i.e., closed under taking subgraphs) factorial classes [Bonnet et al., ICALP '24]. However, monotone small (i.e., containing at most $n! c^n$ many $n$-vertex graphs for some constant $c$) classes do admit implicit representations. This motivates the Small Implicit Graph Conjecture: Every hereditary small class admits an $O(\log n)$-bit labeling scheme. We provide evidence supporting the Small Implicit Graph Conjecture. First, we show that every small weakly sparse (i.e., excluding some fixed bipartite complete graph as a subgraph) class has an implicit representation. This is a consequence of the following fact of independent interest proved in the paper: Every weakly sparse small class has bounded expansion (hence, in particular, bounded degeneracy). Second, we show that every hereditary small class admits an $O(\log^3 n)$-bit labeling scheme, which provides a substantial improvement of the best-known polynomial upper bound of $n^{1-\varepsilon}$ on the size of adjacency labeling schemes for such classes. This is a consequence of another fact of independent interest proved in the paper: Every small class has neighborhood complexity $O(n \log n)$.

Subexponential Parameterized Algorithms for Hitting Subgraphs

from arXiv: Data Structures and Algorithms

Authors: Daniel Lokshtanov, Fahad Panolan, Saket Saurabh, Jie Xue, Meirav Zehavi

For a finite set $\mathcal{F}$ of graphs, the $\mathcal{F}$-Hitting problem aims to compute, for a given graph $G$ (taken from some graph class $\mathcal{G}$) of $n$ vertices (and $m$ edges) and a parameter $k\in\mathbb{N}$, a set $S$ of vertices in $G$ such that $|S|\leq k$ and $G-S$ does not contain any subgraph isomorphic to a graph in $\mathcal{F}$. As a generic problem, $\mathcal{F}$-Hitting subsumes many fundamental vertex-deletion problems that are well-studied in the literature. The $\mathcal{F}$-Hitting problem admits a simple branching algorithm with running time $2^{O(k)}\cdot n^{O(1)}$, while it cannot be solved in $2^{o(k)}\cdot n^{O(1)}$ time on general graphs assuming the ETH. In this paper, we establish a general framework to design subexponential parameterized algorithms for the $\mathcal{F}$-Hitting problem on a broad family of graph classes. Specifically, our framework yields algorithms that solve $\mathcal{F}$-Hitting with running time $2^{O(k^c)}\cdot n+O(m)$ for a constant $c<1$ on any graph class $\mathcal{G}$ that admits balanced separators whose size is (strongly) sublinear in the number of vertices and polynomial in the size of a maximum clique. Examples include all graph classes of polynomial expansion and many important classes of geometric intersection graphs. Our algorithms also apply to the \textit{weighted} version of $\mathcal{F}$-Hitting, where each vertex of $G$ has a weight and the goal is to compute the set $S$ with a minimum weight that satisfies the desired conditions. The core of our framework is an intricate subexponential branching algorithm that reduces an instance of $\mathcal{F}$-Hitting (on the aforementioned graph classes) to $2^{O(k^c)}$ general hitting-set instances, where the Gaifman graph of each instance has treewidth $O(k^c)$, for some constant $c<1$ depending on $\mathcal{F}$ and the graph class.

Monday, September 09

TR24-137 | Implications of Better PRGs for Permutation Branching Programs | Dean Doron, William Hoza

from ECCC Papers

We study the challenge of derandomizing constant-width standard-order read-once branching programs (ROBPs). Let $c \in [1, 2)$ be any constant. We prove that if there are explicit pseudorandom generators (PRGs) for width-$6$ length-$n$ permutation ROBPs with error $1/n$ and seed length $\widetilde{O}(\log^c n)$, then there are explicit hitting set generators (HSGs) for width-$4$ length-$n$ ROBPs with threshold $1/\mathrm{polylog}(n)$ and seed length $\widetilde{O}(\log^c n)$. For context, there are known explicit PRGs that fool constant-width permutation ROBPs with error $\varepsilon$ and seed length $O(\log n \cdot \log(1/\varepsilon))$ (Koucký, Nimbhorkar, and Pudlák STOC 2011; De CCC 2011; Steinke ECCC 2012). When $\varepsilon = 1/n$, there are known constructions of *weighted* pseudorandom generators (WPRGs) that fool polynomial-width permutation ROBPs with seed length $\widetilde{O}(\log^{3/2} n)$ (Pyne and Vadhan CCC 2021; Chen, Hoza, Lyu, Tal, and Wu FOCS 2023; Chattopadhyay and Liao ITCS 2024), but *unweighted* PRGs with seed length $o(\log^2 n)$ remain elusive. Meanwhile, for width-$4$ ROBPs, there are no known explicit PRGs, WPRGs, or HSGs with seed length $o(\log^2 n)$. Our reduction can be divided into two parts. First, we show that explicit low-error PRGs for width-$6$ permutation ROBPs with seed length $\widetilde{O}(\log^c n)$ would imply explicit low-error PRGs for width-$3$ ROBPs with seed length $\widetilde{O}(\log^c n)$. This would improve Meka, Reingold, and Tal's PRG (STOC 2019), which has seed length $o(\log^2 n)$ only when the error parameter is relatively large. Second, we show that for any $w$, $n$, $s$, and $\varepsilon$, an explicit PRG for width-$w$ ROBPs with error $0.01/n$ and seed length $s$ would imply an explicit $\varepsilon$-HSG for width-$(w + 1)$ ROBPs with seed length $O(s + \log n \cdot \log(1/\varepsilon))$.

New for August 2024

from Property Testing Review

Apologies for the late post! After the relative silence of last month, we’re getting back to a moderate cadence of papers. Some distribution testing, quantum testing, learning and testing. We’ve also added a non-sublinear paper on distributions that should be of interest. And here’s the roundup.

Efficient Testable Learning of General Halfspaces with Adversarial Label Noise by Ilias Diakonikolas, Daniel M. Kane, Sihan Liu, and Nikos Zarifis (arXiv). This paper is on the recent model of testable learning introduced by Rubinfeld-Vasiliyan. Consider learning halfspaces over the Gaussian distribution. We have sample access to an input function \(f:\mathbf{R}^d \to \{0,1\}\), where our aim is to learn the closest halfspace to \(f\). The samples come from some fixed, underlying distribution \(\mathcal{D}\). But it is often infeasible to validate this distributional assumption, even in the property testing framework. A tester-learner pair will try to test this assumption, and if it accepts, apply the learning algorithm. The guarantee is: if \(\mathcal{D}\) is indeed (say) Gaussian, then the learning algorithm works as desired. If the tester accepts and the learner outputs a hypothesis (say, a halfspace) \(h\), then it is guaranteed that \(h\) is close to \(f\) according to \(\mathcal{D}\), even if \(\mathcal{D}\) is far from Gaussian. This last part makes the whole setup interesting; the distribution tester might fail to reject the distribution, but we can trust the output hypothesis! There have been many results on learning homogeneous halfspaces under the Gaussian distribution. So the hypothesis class consists of halfspaces going through the origin. This paper is about general (inhomogeneous) halfspaces. At first blush, this might seem like a simple issue; we’re just adding a constant term to the halfspace. But existing techniques break down, often because they’re solving an optimization problem of minimizing error around a band close to the origin. This paper gives a careful reduction to the “nearly” homogeneous case, and gives the first polynomial sample tester-learner pair for this problem.

Tolerant testing stabilizer states by Srinivasan Arunachalam and Arkopal Dutt (arXiv). Let us start with a familiar classical problem, low-degree testing. Given access to a function \(f:\{0,1\}^n \to \{0,1\}\), we wish to test if \(f\) is a quadratic polynomial (over \(\mathbb{F}_2\)). There exist testers based on the Gowers norm: basically, compute various (constant-dimensional) discrete derivatives and check that they are consistent with a quadratic function. These discrete derivatives can be analyzed using Fourier analysis, and the main aim is to show that a function that "locally" (in terms of derivatives) behaves like a quadratic is indeed globally close to being one. This method can be used to get tolerant testers for quadratic polynomials. This paper is about a quantum generalization of this testing result. The input is an \(n\)-qubit state \(|\psi_f\rangle\), promised to be a "phase state". A phase state has a Boolean function \(f\) embedded in it, because one can write a phase state as a linear combination of the \(2^n\) "basis" states, with the coefficients being signs determined by \(f\). A "stabilizer state" is one where the function \(f\) is a quadratic (or so I believe; I'm probably exposing my ignorance of quantum mechanics). This paper uses the Gowers norm techniques to give the first quantum tolerant tester for stabilizer states.
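
For the classical ingredient referenced here, a small sketch of the derivative-based check (standard low-degree testing over \(\mathbb{F}_2\), not the quantum tester of the paper): a polynomial of degree at most 2 has identically zero third discrete derivative, so one can estimate how often random third derivatives vanish.

```python
import random

def third_derivative_is_zero(f, x, a, b, c):
    """Over F_2^n (points encoded as integers, addition = XOR), if deg(f) <= 2
    then XOR-ing f over all eight points x + (subset sum of {a, b, c}) gives 0."""
    total = 0
    for s in range(8):
        shift = 0
        if s & 1: shift ^= a
        if s & 2: shift ^= b
        if s & 4: shift ^= c
        total ^= f(x ^ shift)
    return total == 0

def local_quadraticity(f, n, trials=2000):
    """Fraction of random third-derivative checks that pass; values near 1
    indicate f locally behaves like a degree-2 polynomial."""
    hits = sum(third_derivative_is_zero(f, *(random.randrange(1 << n) for _ in range(4)))
               for _ in range(trials))
    return hits / trials

# Example: f(x) = x_0 x_1 + x_2 (mod 2) is quadratic, so every check passes.
f = lambda x: ((x & 1) & ((x >> 1) & 1)) ^ ((x >> 2) & 1)
print(local_quadraticity(f, n=8))   # 1.0
```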

Improved Bounds for High-Dimensional Equivalence and Product Testing using Subcube Queries by Tomer Adar, Eldar Fischer, and Amit Levi (arXiv). Consider distribution testing on high-dimensional data, so the domain is \(\{0,1\}^n\). A nice model for distribution testing is the subcube conditioning model of Bhattacharyya and Chakraborty. Suppose we fix the coordinates in some subset \(S \subseteq [n]\) to \(x_S\). We can generate a sample \(y\) from the distribution, conditioned on \(y_S = x_S\) (meaning, \(y\) agrees with \(x\) on \(S\)). The problem is to perform equivalence testing of distributions in this model. Previous results gave \(O(n^2/\varepsilon^2)\)-query algorithms, and this paper gives a significantly improved algorithm making \(O(n/\varepsilon^2)\) queries. Interestingly, the algorithm only makes weaker queries. One distribution is only accessed by "marginal queries": given \(x_S\) as before, we sample only the marginal distribution of coordinate \(i\), conditioned on \(S\) being fixed to \(x_S\). (Hence, the output is a single bit. Also, we note that the other distribution is accessed by prefix queries, a weaker version of subcube queries.) These generalizations lead to more results, on testing equivalence in the interval query model, and on testing the property of being a product distribution. This paper also proves an \(\Omega(\sqrt{n}/\varepsilon^2)\) lower bound for testing product distributions.
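
A marginal query is easy to simulate when the distribution is explicitly stored, which is how I like to think about the primitive (the representation and names below are mine, for illustration only): fix the coordinates in \(S\) to \(x_S\), restrict to the consistent points, and output a single bit sampled from the conditional marginal of coordinate \(i\).

```python
import random

def marginal_query(dist, fixed, i):
    """dist: dict mapping n-bit tuples to probabilities.
    fixed: dict {coordinate: bit} encoding the conditioning x_S.
    Returns one sample of coordinate i conditioned on the fixed coordinates."""
    weights = [0.0, 0.0]
    for x, p in dist.items():
        if all(x[j] == b for j, b in fixed.items()):
            weights[x[i]] += p
    total = weights[0] + weights[1]
    if total == 0:
        raise ValueError("conditioning event has probability zero")
    return 1 if random.random() < weights[1] / total else 0

# Uniform over {000, 011, 101}; sample coordinate 2 given that coordinate 0 equals 0.
dist = {(0, 0, 0): 1/3, (0, 1, 1): 1/3, (1, 0, 1): 1/3}
print(marginal_query(dist, fixed={0: 0}, i=2))   # 0 or 1, each with probability 1/2
```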

Parallel Sampling via Counting by Nima Anari, Ruiquan Gao, and Aviad Rubinstein (arXiv). This isn't a "typical sublinear algorithm" per se, but I think it is quite interesting to those of us who think about sublinearity, adaptivity, and distributions. This result has connections to the previous paper. Our aim is to sample from an unknown distribution \(\mu\) over \([q]^n\). We have access to "marginal queries" as described above. This problem appears in large language models, wherein neural nets are trained on various marginals (the next word), but the output is a sentence (a list of words). Observe that there is a simple "\(O(n)\)-round" algorithm. Without any fixing, query the marginal of the first coordinate. Fix the first coordinate, query the marginal of the second coordinate. Fix the first two coordinates, rinse and repeat. This requires \(n\) rounds of marginal queries. In this model, polynomially many marginal queries can be made in a single round, and the aim is to minimize the number of rounds (basically, bounding the adaptivity). This paper gives an \(\widetilde{O}(n^{2/3})\)-round procedure for sampling, and shows an \(\Omega(n^{1/3})\) lower bound.
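
Here is the simple \(n\)-round baseline just described, sketched against a marginal-query oracle such as the one in the previous snippet (binary alphabet for simplicity; the paper handles \([q]\) and, crucially, achieves far fewer adaptive rounds).

```python
def sequential_sample(n, marginal_sampler):
    """Exact sampling by the chain rule: round i fixes coordinate i by querying its
    marginal conditioned on all previously fixed coordinates. Uses n adaptive rounds."""
    fixed = {}
    for i in range(n):
        fixed[i] = marginal_sampler(fixed, i)
    return tuple(fixed[i] for i in range(n))

# Example, reusing marginal_query and dist from the previous sketch:
# sequential_sample(3, lambda fixed, i: marginal_query(dist, fixed, i))
```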

By Seshadhri

Deriving differential approximation results for $k\,$CSPs from combinatorial designs

from arXiv: Computational Complexity

Authors: Jean-François Culus, Sophie Toulouse

Traditionally, inapproximability results for $\mathsf{Max\,k\,CSP\!-\!q}$ have been established using balanced $t$-wise independent distributions, which are related to orthogonal arrays of strength $t$. We contribute to the field by exploring such combinatorial designs in the context of the differential approximation measure. First, we establish a connection between the average differential ratio and orthogonal arrays. We deduce a differential approximation ratio of $1/q^k$ on $(k +1)$-partite instances of $\mathsf{k\,CSP\!-\!q}$, $\Omega(1/n^{k/2})$ on Boolean instances, $\Omega(1/n)$ when $k =2$, and $\Omega(1/\nu^{k -\lceil\log_{p^\kappa} k\rceil})$ when $k\geq 3$ and $q\geq 3$, where $p^\kappa$ is the smallest prime power greater than $q$. Secondly, by considering pairs of arrays related to balanced $k$-wise independence, we establish a reduction from $\mathsf{k\,CSP\!-\!q}$ to $\mathsf{k\,CSP\!-\!k}$ (with $q>k$), with an expansion factor of $1/(q -k/2)^k$ on the differential approximation guarantee. This, combined with the result of Yuri Nesterov, implies a lower differential approximability bound of $0.429/(q -1)^2$ for $\mathsf{2\,CSP\!-\!q}$. Finally, we prove that every Hamming ball with radius $k$ provides a $\Omega(1/n^k)$-approximation of the instance diameter. Our work highlights the utility of combinatorial designs in establishing approximation results.

Red-Blue Pebbling with Multiple Processors: Time, Communication and Memory Trade-offs

from arXiv: Computational Complexity

Authors: Toni Böhnlein, Pál András Papp, A. N. Yzelman

The well-studied red-blue pebble game models the execution of an arbitrary computational DAG by a single processor over a two-level memory hierarchy. We present a natural generalization to a multiprocessor setting where each processor has its own limited fast memory, and all processors share unlimited slow memory. To our knowledge, this is the first thorough study that combines pebbling and DAG scheduling problems, capturing the computation of general workloads on multiple processors with memory constraints and communication costs. Our pebbling model enables us to analyze trade-offs between workload balancing, communication and memory limitations, and it captures real-world factors such as superlinear speedups due to parallelization. Our results include upper and lower bounds on the pebbling cost, an analysis of a greedy pebbling strategy, and an extension of NP-hardness results for specific DAG classes from simpler models. For our main technical contribution, we show two inapproximability results that already hold for the long-standing problem of standard red-blue pebbling: (i) the optimal I/O cost cannot be approximated to any finite factor, and (ii) the optimal total cost (I/O+computation) can only be approximated to a limited constant factor, i.e., it does not allow for a polynomial-time approximation scheme. These results also carry over naturally to our multiprocessor pebbling model.

Morphing Planar Graph Drawings via Orthogonal Box Drawings

from arXiv: Computational Geometry

Authors: Therese Biedl, Anna Lubiw, Jack Spalding-Jamieson

We give an algorithm to morph planar graph drawings that achieves small grid size at the expense of allowing a constant number of bends on each edge. The input is an $n$-vertex planar graph and two planar straight-line drawings of the graph on an $O(n) \times O(n)$ grid. The planarity-preserving morph is composed of $O(n)$ linear morphs between successive pairs of drawings, each on an $O(n) \times O(n)$ grid with a constant number of bends per edge. The algorithm to compute the morph runs in $O(n^2)$ time on a word RAM model with standard arithmetic operations -- in particular no square roots or cube roots are required. The first step of the algorithm is to morph each input drawing to a planar orthogonal box drawing where vertices are represented by boxes and each edge is drawn as a horizontal or vertical segment. The second step is to morph between planar orthogonal box drawings. This is done by extending known techniques for morphing planar orthogonal drawings with point vertices.

3D Data Long-Term Preservation in Cultural Heritage

from arXiv: Computational Geometry

Authors: Nicola Amico, Achille Felicetti

The report explores the challenges and strategies for preserving 3D digital data in cultural heritage. It discusses the issue of technological obsolescence, emphasising the need for sustainable storage solutions and ongoing data management strategies. Key topics include understanding technological obsolescence, the lifecycle of digital content, digital continuity, data management plans (DMP), FAIR principles, and the use of public repositories. The report also covers the importance of metadata in long-term digital preservation, including types of metadata and strategies for building valuable metadata. It examines the evolving standards and interoperability in 3D format preservation and the importance of managing metadata and paradata. The document provides a comprehensive overview of the challenges and solutions for preserving 3D cultural heritage data in the long term.

Hardness of sampling for the anti-ferromagnetic Ising model on random graphs

from arXiv: Data Structures and Algorithms

Authors: Neng Huang, Will Perkins, Aaron Potechin

We prove a hardness of sampling result for the anti-ferromagnetic Ising model on random graphs of average degree $d$ for large constant $d$, proving that when the normalized inverse temperature satisfies $\beta>1$ (asymptotically corresponding to the condensation threshold), then w.h.p. over the random graph there is no stable sampling algorithm that can output a sample close in $W_2$ distance to the Gibbs measure. The results also apply to a fixed-magnetization version of the model, showing that there are no stable sampling algorithms for low but positive temperature max and min bisection distributions. These results show a gap in the tractability of search and sampling problems: while there are efficient algorithms to find near optimizers, stable sampling algorithms cannot access the Gibbs distribution concentrated on such solutions. Our techniques involve extensions of the interpolation technique relating behavior of the mean field Sherrington-Kirkpatrick model to behavior of Ising models on random graphs of average degree $d$ for large $d$. While previous interpolation arguments compared the free energies of the two models, our argument compares the average energies and average overlaps in the two models.

Constant congestion linkages in polynomially strong digraphs in polynomial time

from arXiv: Data Structures and Algorithms

Authors: Raul Lopes, Ignasi Sau

Given integers $k,c > 0$, we say that a digraph $D$ is $(k,c)$-linked if for every pair of ordered sets $\{s_1, \ldots, s_k\}$ and $\{t_1, \ldots, t_k\}$ of vertices of $D$, there are $P_1, \ldots, P_k$ such that for $i \in [k]$ each $P_i$ is a path from $s_i$ to $t_i$ and every vertex of $D$ appears in at most $c$ of those paths. Thomassen [Combinatorica, 1991] showed that for every fixed $k \geq 2$ there is no integer $p$ such that every $p$-strong digraph is $(k,1)$-linked. Edwards et al. [ESA, 2017] showed that every digraph $D$ with directed treewidth at least some function $f(k)$ contains a large bramble of congestion $2$ and that every $(36k^3 + 2k)$-strong digraph containing a bramble of congestion $2$ and size roughly $188k^3$ is $(k,2)$-linked. Since the directed treewidth of a digraph has to be at least its strong connectivity, this implies that there is a function $L(k)$ such that every $L(k)$-strong digraph is $(k,2)$-linked. This result was improved by Campos et al. [ESA, 2023], who showed that any $k$-strong digraph containing a bramble of size at least $2k(c\cdot k -c + 2) + c(k-1)$ and congestion $c$ is $(k,c)$-linked. Regarding the bramble, although the given bound on $f(k)$ is very large, Masa\v{r}\'{\i}k et al. [SIDMA, 2022] showed that directed treewidth $\mathcal{O}(k^{48}\log^{13} k)$ suffices if the congestion is relaxed to $8$. We first show how to drop the dependence on $c$, for even $c$, on the size of the bramble that is needed in the work of Campos et al. [ESA, 2023]. Then, by making two local changes in the proof of Masa\v{r}\'{\i}k et al. [SIDMA, 2022] we show how to build in polynomial time a bramble of size $k$ and congestion $8$ assuming that a large obstruction to directed treewidth (namely, a path system) is given. Applying these results, we show that there is a polynomial function $g(k)$ such that every $g(k)$-strong digraph is $(k,8)$-linked.

MATWA: A Web Toolkit for Matching under Preferences

from arXiv: Data Structures and Algorithms

Authors: Frederik Glitzner, David Manlove

Matching markets, where agents are assigned to one another based on preferences and capacity constraints, are pervasive in various domains. This paper introduces MATWA (https://matwa.optimalmatching.com), a web application offering a rich collection of algorithms for fundamental problem models involving matching under preferences. MATWA provides results and visualizations of matching algorithm outputs based on different methods for providing problem instances. In this paper, we describe the features of the system, illustrating its usage for different problem models, and outlining the algorithm implementations that are supported. We also give evidence of usability testing and illustrate how the system was used to obtain new empirical results for a specific matching problem. MATWA is intended to be a resource for the community of researchers in the area of matching under preferences, supporting experimentation as well as aiding the understanding of matching algorithms.
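
MATWA's own implementations are not reproduced here; purely as a flavor of the kind of algorithm such a toolkit covers, the following is a minimal deferred-acceptance (Gale-Shapley) sketch for the one-to-one stable marriage model with complete preference lists (names and data layout are illustrative).

```python
def gale_shapley(proposer_prefs, receiver_prefs):
    """proposer_prefs[p] and receiver_prefs[r] are complete preference lists (best first).
    Returns a stable matching as a dict {proposer: receiver}."""
    rank = {r: {p: k for k, p in enumerate(prefs)} for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}
    engaged_to = {}                       # receiver -> proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged_to:
            engaged_to[r] = p
        elif rank[r][p] < rank[r][engaged_to[r]]:
            free.append(engaged_to[r])    # r trades up; its previous partner is free again
            engaged_to[r] = p
        else:
            free.append(p)                # r rejects p; p will propose to its next choice
    return {p: r for r, p in engaged_to.items()}

# Example instance with three agents on each side.
men = {"m1": ["w1", "w2", "w3"], "m2": ["w2", "w1", "w3"], "m3": ["w1", "w3", "w2"]}
women = {"w1": ["m2", "m1", "m3"], "w2": ["m1", "m2", "m3"], "w3": ["m1", "m2", "m3"]}
print(gale_shapley(men, women))   # m1-w1, m2-w2, m3-w3
```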

Faster Sampling from Log-Concave Densities over Polytopes via Efficient Linear Solvers

from arXiv: Data Structures and Algorithms

Authors: Oren Mangoubi, Nisheeth K. Vishnoi

We consider the problem of sampling from a log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to a polytope $K:=\{\theta \in \mathbb{R}^d: A\theta \leq b\}$, where $A\in \mathbb{R}^{m\times d}$ and $b \in \mathbb{R}^m$. The fastest-known algorithm \cite{mangoubi2022faster} for the setting when $f$ is $O(1)$-Lipschitz or $O(1)$-smooth runs in roughly $O(md \times md^{\omega -1})$ arithmetic operations, where the $md^{\omega -1}$ term arises because each Markov chain step requires computing a matrix inversion and determinant (here $\omega \approx 2.37$ is the matrix multiplication constant). We present a nearly-optimal implementation of this Markov chain with per-step complexity which is roughly the number of non-zero entries of $A$ while the number of Markov chain steps remains the same. The key technical ingredients are 1) to show that the matrices that arise in this Dikin walk change slowly, 2) to deploy efficient linear solvers that can leverage this slow change to speed up matrix inversion by using information computed in previous steps, and 3) to speed up the computation of the determinantal term in the Metropolis filter step via a randomized Taylor series-based estimator.
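
For orientation, here is a plain-numpy sketch of a single Metropolis-filtered Dikin walk step over $\{\theta : A\theta \le b\}$, in which the log-barrier Hessian is formed, inverted, and its determinant taken from scratch at every step; the whole point of the paper above is to avoid paying for that dense linear algebra at each step, which this sketch makes no attempt to do. The step size and names are placeholders.

```python
import numpy as np

def barrier_hessian(A, b, theta):
    """Hessian of the log-barrier of {theta : A theta <= b} at an interior point."""
    slack = b - A @ theta                      # strictly positive inside the polytope
    return A.T @ np.diag(1.0 / slack**2) @ A

def dikin_metropolis_step(A, b, f, theta, r=0.5, rng=None):
    """One step of a Dikin walk with a Metropolis filter, targeting pi ~ exp(-f)."""
    rng = np.random.default_rng() if rng is None else rng
    d = theta.size
    H = barrier_hessian(A, b, theta)
    cov = (r**2 / d) * np.linalg.inv(H)        # per-step matrix inversion
    z = rng.multivariate_normal(theta, cov)
    if np.any(A @ z >= b):                     # proposal left the polytope: stay put
        return theta
    Hz = barrier_hessian(A, b, z)

    def log_q(H_from, x_from, x_to):           # log proposal density, up to constants
        diff = x_to - x_from
        _, logdet = np.linalg.slogdet(H_from)  # per-step determinant
        return 0.5 * logdet - (d / (2 * r**2)) * diff @ H_from @ diff

    log_ratio = (-f(z) + log_q(Hz, z, theta)) - (-f(theta) + log_q(H, theta, z))
    return z if np.log(rng.uniform()) < min(0.0, log_ratio) else theta
```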

FPT Algorithms using Minimal Parameters for a Generalized Version of Maximin Shares

from arXiv: Data Structures and Algorithms

Authors: Klaus Jansen, Alexandra Lassota, Malte Tutas, Adrian Vetta

We study the computational complexity of fairly allocating indivisible, mixed-manna items. For basic measures of fairness, this problem is hard in general. Thus, research has flourished concerning input classes where efficient algorithms exist, both for the purpose of establishing theoretical boundaries and for the purpose of designing practical algorithms for real-world instances. Notably, the paradigm of fixed-parameter tractability (FPT) has led to new insights and improved algorithms for a variety of fair allocation problems; see, for example, Bleim et al. (IJCAI 16), Aziz et al. (AAAI 17), Bredereck et al. (EC 19) and Kulkarni et al. (EC 21). Our focus is the fairness measure maximin shares (MMS). Motivated by the general non-existence of MMS allocations, Aziz et al. (AAAI 17) studied optimal MMS allocations, namely solutions that achieve the best $\alpha$-approximation for the maximin share value of every agent. These allocations are guaranteed to exist, prompting the important open question of whether optimal MMS allocations can be computed efficiently. We answer this question affirmatively by providing FPT algorithms to output optimal MMS allocations. Furthermore, our techniques extend to find allocations that optimize alternative objectives, such as minimizing the additive approximation, and maximizing some variants of global welfare. In fact, all our algorithms are designed for a more general MMS problem in machine scheduling. Here, each mixed-manna item (job) must be assigned to an agent (machine) and has a processing time and a deadline. We develop efficient algorithms running in FPT time. Formally, we present algorithms whose running time is polynomial in the input size times some function of the parameters, and which yield optimal maximin-value approximations (among others) when parameterized by a set of natural parameters.
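
To fix the central definition, here is a brute-force sketch of a single agent's maximin share for additive valuations over goods (the paper treats the more general mixed-manna/scheduling setting and, of course, does not enumerate partitions): the agent imagines splitting the items into one bundle per agent and keeping the worst bundle, and the MMS is the best achievable worst-bundle value.

```python
from itertools import product

def maximin_share(values, n_agents):
    """values: additive item values of one agent (goods only).
    Returns max over partitions into n_agents bundles of the minimum bundle value."""
    best = float("-inf")
    # Brute force: label each item with a bundle index and keep the best labeling.
    for labels in product(range(n_agents), repeat=len(values)):
        bundles = [0] * n_agents
        for item, bundle in enumerate(labels):
            bundles[bundle] += values[item]
        best = max(best, min(bundles))
    return best

# Example: items worth 7, 5, 4, 4 split among 2 agents -> MMS = 9 (bundles {7,4} and {5,4}).
print(maximin_share([7, 5, 4, 4], 2))
```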

Improving the Parameter Dependency for High-Multiplicity Scheduling on Uniform Machines

from arXiv: Data Structures and Algorithms

Authors: Klaus Jansen, Kai Kahler, Lis Pirotton, Malte Tutas

We address scheduling problems on uniform machines with high-multiplicity encoding, introducing a divide and conquer approach to assess the feasibility of a general Load Balancing Problem (LBP). Via reductions, our algorithm can also solve the more well-known problems $Q\|C_{\max}$ (makespan minimization), $Q\|C_{\min}$ (Santa Claus) and $Q\|C_{\text{envy}}$ (envy minimization). State-of-the-art algorithms for these problems, e.g. by Knop et al. (Math.\ Program.\ '23), have running times with parameter dependency $p_{\max}^{O(d^2)}$, where $p_{\max}$ is the largest processing time and $d$ is the number of different processing times. We partially answer the question asked by Kouteck\'y and Zink (ISAAC'20) about whether this quadratic dependency on $d$ can be improved to a linear one: Under the natural assumption that the machines are similar in the sense that $s_{\max}/s_{\min} \leq p_{\max}^{O(1)}$ and $\tau\leq p_{\max}^{O(1)}$, our proposed algorithm achieves parameter dependency $p_{\max}^{O(d)}$ for the problems ${Q\|\{C_{\max},C_{\min},C_{\text{envy}}\}}$. Here, $\tau$ is the number of distinct machine speeds. Even without this assumption, our running times achieve a state-of-the-art parameter dependency, and they do so with an entirely different approach.

RCM++:Reverse Cuthill-McKee ordering with Bi-Criteria Node Finder

from arXiv: Data Structures and Algorithms

Authors: JiaJun Hou, HongJie Liu, ShengXin Zhu

The Reverse Cuthill-McKee (RCM) algorithm is a graph-based method for reordering sparse matrices, renowned for its effectiveness in minimizing matrix bandwidth and profile. This reordering enhances the efficiency of matrix operations, making RCM pivotal among reordering algorithms. In the context of executing the RCM algorithm, it is often necessary to select a starting node from the graph representation of the matrix. This selection allows the execution of BFS (Breadth-First Search) to construct the level structure. The choice of this starting node significantly impacts the algorithm's performance, necessitating a heuristic approach to identify an optimal starting node, commonly referred to as the RCM starting node problem. Techniques such as the minimum degree method and the George-Liu (GL) algorithm are popular solutions. This paper introduces a novel algorithm addressing the RCM starting node problem by considering both the eccentricity and the width of the node during the run. Integrating this algorithm with the RCM algorithm, we introduce RCM++. Experimental results demonstrate that RCM++ outperforms existing RCM methods in major software libraries, achieving higher quality results with comparable computation time. This advancement fosters the further application and development of the RCM algorithm. The code related to this research has been made available at https://github.com/SStan1/RCM\_PP.git.
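
For readers new to RCM, here is a compact sketch of the classical pipeline the paper builds on: pick a starting node with a George-Liu-style pseudo-peripheral search, run BFS visiting unvisited neighbours in order of increasing degree, and reverse the resulting order. The starting-node heuristic below is the standard eccentricity-driven one, not the bi-criteria (eccentricity plus level-width) finder proposed in this paper.

```python
from collections import deque

def bfs_levels(adj, start):
    """Level structure rooted at start: a list of levels, each a list of nodes."""
    seen, levels, frontier = {start}, [], [start]
    while frontier:
        levels.append(frontier)
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    nxt.append(v)
        frontier = nxt
    return levels

def pseudo_peripheral_node(adj, start):
    """George-Liu-style search: move to a minimum-degree node in the deepest BFS
    level as long as the number of levels (eccentricity estimate) keeps growing."""
    node, depth = start, 0
    while True:
        levels = bfs_levels(adj, node)
        if len(levels) <= depth:
            return node
        depth = len(levels)
        node = min(levels[-1], key=lambda v: len(adj[v]))

def reverse_cuthill_mckee(adj):
    """adj: dict node -> list of neighbours of a connected graph. Returns an ordering."""
    start = pseudo_peripheral_node(adj, next(iter(adj)))
    order, seen, queue = [], {start}, deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in sorted((w for w in adj[u] if w not in seen), key=lambda w: len(adj[w])):
            seen.add(v)
            queue.append(v)
    return order[::-1]   # the "reverse" in Reverse Cuthill-McKee

# Example: a small path with a branch.
adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
print(reverse_cuthill_mckee(adj))   # e.g. [2, 0, 1, 3, 4]
```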

A Hybrid Vectorized Merge Sort on ARM NEON

from arXiv: Data Structures and Algorithms

Authors: Jincheng Zhou, Jin Zhang, Xiang Zhang, Tiaojie Xiao, Di Ma, Chunye Gong

Sorting algorithms are among the most extensively researched topics in computer science and serve numerous practical applications. Although various sorting algorithms have been proposed for efficiency, different architectures offer distinct flavors to the implementation of parallel sorting. In this paper, we propose a hybrid vectorized merge sort on ARM NEON, named NEON Merge Sort for short (NEON-MS). In detail, according to the available register functions, we first identify the optimal number of registers to avoid register-to-memory accesses caused by the write-back of intermediate results. More importantly, following the generic merge sort framework that primarily uses a sorting network for column sort and merging networks for three types of vectorized merge, we further improve their structures for high efficiency in a unified, asymmetric way: 1) it makes optimal sorting networks with few comparators possible; 2) a hybrid implementation of both serial and vectorized merges yields a pipeline in which merge instructions are highly interleaved. Experiments on a single FT2000+ core show that NEON-MS is 3.8 and 2.1 times faster than std::sort and boost::block\_sort, respectively, on average. Additionally, compared to the parallel version of the latter, NEON-MS gains an average speedup of 1.25.
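
To make the "merging network" ingredient concrete, here is a scalar Python model of Batcher's odd-even merging network on a power-of-two block: the comparator pattern is fixed and data-independent, which is what makes such networks natural to vectorize with SIMD instructions. This is a didactic model only, not the NEON kernel of the paper.

```python
def compare_exchange(a, i, j):
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def odd_even_merge(a, lo, n, r=1):
    """Merge the two sorted halves of a[lo:lo+n] (n a power of two) in place,
    using Batcher's fixed comparator pattern on the stride-r subsequence."""
    step = 2 * r
    if step < n:
        odd_even_merge(a, lo, n, step)        # merge the even-indexed subsequence
        odd_even_merge(a, lo + r, n, step)    # merge the odd-indexed subsequence
        for i in range(lo + r, lo + n - r, step):
            compare_exchange(a, i, i + r)     # data-independent clean-up comparators
    else:
        compare_exchange(a, lo, lo + r)

# Example: merge the sorted halves [1, 5, 8, 9] and [2, 3, 6, 7] in place.
data = [1, 5, 8, 9, 2, 3, 6, 7]
odd_even_merge(data, 0, len(data))
print(data)   # [1, 2, 3, 5, 6, 7, 8, 9]
```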
