Theory of Computing Report

Monday, February 06

A Computational Separation Between Quantum No-cloning and No-teleportation

from arXiv: Computational Complexity

Authors: Barak Nehoran, Mark Zhandry

Two of the fundamental no-go theorems of quantum information are the no-cloning theorem (that it is impossible to make copies of general quantum states) and the no-teleportation theorem (the prohibition on sending quantum states over classical channels without pre-shared entanglement). They are known to be equivalent, in the sense that a collection of quantum states is clonable if and only if it is teleportable. Our main result suggests that this is not the case when computational efficiency is considered. We give a collection of quantum states and oracles relative to which these states are efficiently clonable but not efficiently teleportable. Given that the opposite scenario is impossible (states that can be teleported can always trivially be cloned), this gives the most complete oracle separation possible between these two important no-go properties. In doing so, we introduce a related quantum no-go property, reconstructibility, which refers to the ability to construct a quantum state from a uniquely identifying classical description. We show the stronger result of a collection of quantum states that are efficiently clonable but not efficiently reconstructible. This novel no-go property only exists in relation to computational efficiency, as it is trivial for unbounded computation. It thus opens up the possibility of further computational no-go properties that have not yet been studied because they do not exist outside the computational context.

Complexity of Solo Chess with Unlimited Moves

from arXiv: Data Structures and Algorithms

Authors: Josh Brunner, Lily Chung, Michael Coulombe, Erik D. Demaine, Timothy Gomez, Jayson Lynch

We analyze Solo Chess puzzles, where the input is an $n \times n$ board containing some standard Chess pieces of the same color, and the goal is to make a sequence of capture moves to reduce down to a single piece. Prior work analyzes this puzzle for a single piece type when each piece is limited to make at most two capture moves (as in the Solo Chess puzzles on chess.com). By contrast, we study when each piece can make an unlimited number of capture moves. We show that any single piece type can be solved in polynomial time in a general model of piece types, while any two standard Chess piece types are NP-complete. We also analyze the restriction (as on chess.com) that one piece type is unique and must be the last surviving piece, showing that in this case some pairs of piece types become tractable while others remain hard.

Optimal Heaviest Induced Ancestors

from arXiv: Data Structures and Algorithms

Authors: Panagiotis Charalampopoulos, Bartłomiej Dudek, Paweł Gawrychowski, Karol Pokorski

We revisit the Heaviest Induced Ancestors (HIA) problem that was introduced by Gagie, Gawrychowski, and Nekrich [CCCG 2013] and has a number of applications in string algorithms. Let $T_1$ and $T_2$ be two rooted trees whose nodes have weights that are increasing in all root-to-leaf paths, and labels on the leaves, such that no two leaves of a tree have the same label. A pair of nodes $(u, v)\in T_1 \times T_2$ is \emph{induced} if and only if there is a label shared by leaf-descendants of $u$ and $v$. In an HIA query, given nodes $x \in T_1$ and $y \in T_2$, the goal is to find an induced pair of nodes $(u, v)$ of the maximum total weight such that $u$ is an ancestor of~$x$ and $v$ is an ancestor of $y$.

Let $n$ be the upper bound on the sizes of the two trees. It is known that no data structure of size $\tilde{\mathcal{O}}(n)$ can answer HIA queries in $o(\log n / \log \log n)$ time [Charalampopoulos, Gawrychowski, Pokorski; ICALP 2020]. This (unconditional) lower bound is a $\operatorname{polyloglog} n$ factor away from the query time of the fastest $\tilde{\mathcal{O}}(n)$-size data structure known to date for the HIA problem [Abedin, Hooshmand, Ganguly, Thankachan; Algorithmica 2022]. In this work, we resolve the query-time complexity of the HIA problem for the near-linear space regime by presenting a data structure that can be built in $\tilde{\mathcal{O}}(n)$ time and answers HIA queries in $\mathcal{O}(\log n/\log\log n)$ time. As a direct corollary, we obtain an $\tilde{\mathcal{O}}(n)$-size data structure that maintains the LCS of a static string and a dynamic string, both of length at most $n$, in time optimal for this space regime.

The main ingredients of our approach are fractional cascading and the utilization of an $\mathcal{O}(\log n/ \log\log n)$-depth tree decomposition.

Group Fairness in Non-monotone Submodular Maximization

from arXiv: Data Structures and Algorithms

Authors: Jing Yuan, Shaojie Tang

Maximizing a submodular function has a wide range of applications in machine learning and data mining. One such application is data summarization, whose goal is to select a small set of representative and diverse data items from a large dataset. However, data items might have sensitive attributes such as race or gender; in this setting, it is important to design \emph{fairness-aware} algorithms to mitigate potential algorithmic bias that may cause over- or under-representation of particular groups. Motivated by this, we propose and study the classic non-monotone submodular maximization problem subject to novel group fairness constraints. Our goal is to select a set of items that maximizes a non-monotone submodular function, while ensuring that the number of selected items from each group is proportionate to its size, to the extent specified by the decision maker. We develop the first constant-factor approximation algorithms for this problem. We also extend the basic model to incorporate an additional global size constraint on the total number of selected items.

Chaining of Maximal Exact Matches in Graphs

from arXiv: Data Structures and Algorithms

Authors: Nicola Rizzo, Manuel Cáceres, Veli Mäkinen

We study the problem of finding maximal exact matches (MEMs) between a query string $Q$ and a labeled directed acyclic graph (DAG) $G=(V,E,\ell)$ and subsequently co-linearly chaining these matches. We show that it suffices to compute MEMs between node labels and $Q$ (node MEMs) to encode full MEMs. Node MEMs can be computed in linear time and we show how to co-linearly chain them to solve the Longest Common Subsequence (LCS) problem between $Q$ and $G$. Our chaining algorithm is the first to consider a symmetric formulation of the chaining problem in graphs and runs in $O(k^2|V| + |E| + kN\log N)$ time, where $k$ is the width (minimum number of paths covering the nodes) of $G$, and $N$ is the number of node MEMs. We then consider the problem of finding MEMs when the input graph is an indexable elastic founder graph (subclass of labeled DAGs studied by Equi et al., Algorithmica 2022). For arbitrary input graphs, the problem cannot be solved in truly sub-quadratic time under SETH (Equi et al., ICALP 2019). We show that we can report all MEMs between $Q$ and an indexable elastic founder graph in time $O(nH^2 + m + M_\kappa)$, where $n$ is the total length of node labels, $H$ is the maximum number of nodes in a block of the graph, $m = |Q|$, and $M_\kappa$ is the number of MEMs of length at least $\kappa$.

Online Ad Allocation with Predictions

from arXiv: Data Structures and Algorithms

Authors: Fabian Spaeh, Alina Ene

Display Ads and the generalized assignment problem are two well-studied online packing problems with important applications in ad allocation and other areas. In both problems, ad impressions arrive online and have to be allocated immediately to budget-constrained advertisers. Worst-case algorithms that achieve the ideal competitive ratio are known, but they might act overly conservatively given the predictable and usually tame nature of real-world input. Given this discrepancy, we develop an algorithm for both problems that incorporates machine-learned predictions and can thus improve performance beyond the worst case. Our algorithm is based on the work of Feldman et al. (2009) and similar in nature to that of Mahdian et al. (2007), who were the first to develop a learning-augmented algorithm for the related, but more structured, AdWords problem. We use a novel analysis to show that our algorithm is able to capitalize on a good prediction while being robust against poor predictions. We experimentally evaluate our algorithm on synthetic and real-world data over a wide range of predictions. Our algorithm consistently outperforms the worst-case algorithm without predictions.

Qubit-Efficient Randomized Quantum Algorithms for Linear Algebra

from arXiv: Data Structures and Algorithms

Authors: Samson Wang, Sam McArdle, Mario Berta

We propose a class of randomized quantum algorithms for the task of sampling from matrix functions, without the use of quantum block encodings or any other coherent oracle access to the matrix elements. As such, our use of qubits is purely algorithmic, and no additional qubits are required for quantum data structures. For $N\times N$ Hermitian matrices, the space cost is $\log(N)+1$ qubits and depending on the structure of the matrices, the gate complexity can be comparable to state-of-the-art methods that use quantum data structures of up to size $O(N^2)$, when considering equivalent end-to-end problems. Within our framework, we present a quantum linear system solver that allows one to sample properties of the solution vector, as well as an algorithm for sampling properties of ground states of Hamiltonians. As a concrete application, we combine these two sub-routines to present a scheme for calculating Green's functions of quantum many-body systems.

Sunday, February 05

Nathan Never strikes back

from Emanuele Viola

Amiga Germany Fan’zine – #5 features Nathan Never, 30 years after the game came out (the article is from some months ago, but I just found out). You can read the article here.

My first day of high school I was looking for a desk, and the one that was left was occupied by Marco Genovesi (also see the previous post about Black Viper, our other game together, also featured in the article). We had both done a little programming on the mythical C64. I had written, in BASIC, a fighting game inspired by Hokuto No Ken: hand-coded pixellated sprites doing flying martial-arts kicks. It was on a cassette — I didn’t have a floppy disk. I would love to see it again, but it seems lost forever. It was my first game, written sometime when I was between 10 and 12.

If I think back at my schools, from elementary to high, I scream in horror. There was nothing. No play-yard, no props, nothing. Even the walls! How much does it cost to hang a world map? Instead they were solid color. We were 8-1 or something, every day, including Saturdays, in a room with solid color walls, solid color desks, and a blackboard, and that’s it. Perhaps no wonder that I exercised my art and craft on what I had. I dismembered floppy disks, cassettes, speakers etc. and stuck them on the wall. I scribbled all over my desk.

At some point I got an Amiga 1000 (my trajectory was Spectrum 48K, Commodore 64, Amiga 1000). Naturally I wanted to program, but had zero leads. Unlike today, it was very hard to get started. Interestingly, people more experienced than me had the same complaint, like “when I was 13 myself good luck finding an assembler.” Actually, I think things will continue to move in this direction. Even today, learning something isn’t that easy, and one can imagine how in 20 years you’ll have tutorials precisely tailored to your background, instead of having to zap through youtube to find the small step you’re missing.

So for a while I was stuck fiddling with startup sequences and shell commands. I remember my parents puzzled at why I wasn’t doing anything fancy like on the C64.
“You’re still at popcli” (a type of shell, known back then as CLI, command line interface) they told me once — I am cursed with good memory.
My reply was “You need a program to write a program.” They looked puzzled. (And so was I, after all, I didn’t need one on the C64.)

My turning point occurred when I did one of my stupid things. I was balancing myself on the top beam of a swing. Naturally, I fell, didn’t land right, and injured my ankle. I was stuck in my room for a month, and one day Marco showed up and brought me the Amiga hardware reference manual, a book we had overheard was critical. I thanked him and I remember him saying: “I am doing it for me.” This book became my bible. I thrashed the stupid C compilers, got an assembler, and started “bashing metal” and having fun.

Then there was the store. I don’t remember how I ended up there. It sold games, so maybe I was just there to buy games. But I never bought a new game — only pirated ones — so the explanation doesn’t stand. Anyway, we were there, and the person there was a techno-musician who knew people who wanted to make a game based on Nathan Never, a pretty high-profile graphic novel series in Italy.

To this day I am shocked they didn’t find anyone else, since as I said it’s a pretty high-profile series. Maybe the programmer wanted too much money, or ditched them at the last moment and they needed the game fast. Who knows. I had just barely started programming on the Amiga, but… I know the blitter, I know the copper; I am obviously ready for a major project.

So we signed the contract, and I insisted on some money upfront, which I got. A few Ks. The development of the game was covered in games magazines, which other people in our high school were devouring. They were shocked to find out the team was us! Walking around, we would pass by a store and find the box of the game on display.

As part of my deal I managed to get an Amiga 3000 tower, a one-of-a-kind machine, which for a kid like me was a dream. Before that, I didn’t have enough memory to play the game from the assembler with the graphics loaded, so when I tested it I would see dots instead of sprites. Marco, who did the graphics, didn’t have a color monitor at the beginning, and would pick colors from the hex code, quite cool. I brought the 3000T to our vacation that summer, and instead of going to the sea (which I hated) I stayed indoors to program. Naturally, I spent 50% (easily) of that time programming a script that would compile an animation through a landscape of fractal mountains. The final animation was quite cool.

I wish the game was more fun to play. It isn’t really my fault: no experience, no time, and crucially the people who imposed the game design on us were not players but exclusively concerned with showing the main character, whose frames occupy most of the memory. More than the game they cared about the graphic novel that came with it in the box. Of all the people buzzing around the game, I was the only one who actually played computer games (I still do). Still, of course, I could have improved the gameplay. Instead, on an ultra-tight deadline, I decided to embark on a 3D “escape from the tunnel” last level, something they didn’t ask for (they were happy with an animation, which would have cost me zero effort) and that probably almost no one has seen, since to get there you need to beat two grueling platform levels and a puzzle level, with one life and no save state. (The 3D level starts at 20:09 in this long play.) And this is what it means to be 14.

On the bright side, however, perhaps the game has its unique charm, and today it can be played on a hard disk through the emulator, even though the game was not designed to be installed on a hard disk. And this is great because playing the game from the disks is really painful, despite or in part because of a crazy color hack to make a 3D “change disk” message (another uncalled-for feature). And it’s a good feeling that after 30+ years some people are still watching and playing the game.

By Manu

Artificial Intelligence Just Lost a Leader

from Richard Lipton

Plus a position-search announcement from NSF

Roger Schank just passed away. Roger was a top leader of AI. I overlapped with him during my time at Yale. In 1974, he became a professor of computer science and psychology at Yale University. In 1981, Schank became Chairman of Computer Science at Yale and director of the Yale Artificial Intelligence Project. I was gone by then, off to Berkeley.

John Brockman wrote: He is a computer scientist and cognitive psychologist who has worked in the AI field for twenty-five years. Like Marvin Minsky, he takes the strong AI view, but rather than trying to build an intelligent machine he wants to deconstruct the human mind. He wants to know, in particular, how natural language— one’s mother tongue — is processed, how memory works, and how learning occurs. Schank thinks of the human mind as a learning device, and he thinks that it is being taught in the wrong way. He is something of a gadfly; he deplores the curriculum-based, drill-oriented methods in today’s schools, and his most recent contributions have been in the area of education, looking at ways to use computers to enhance the learning process.

Roger’s View

Roger was a scary guy—especially for junior theory faculty like me. I did get along with Roger and he did support me when I was up for tenure. But he was scary no matter what. Part of the issue was that Roger did not think much of any research that was not AI based. Theory was not a top issue in his view.

Beat Roger Out

At Yale once Schank walked into Alan Perlis’s office and told Alan that he had thought about some post-doc position we had funding for. Schank began to explain to Perlis that he, Roger, had a couple of ideas of who he planned to hire to fill this position—of course the position was his. Perlis waved him off and said,

Lipton already has found a candidate and the position is gone.

I was not there, so all this is second-hand from Perlis, but I can imagine that Roger was not pleased—in his mind all resources of the department should have gone to build up the AI group. Roger was initially upset, but after this event he acted differently toward me. He treated me with more respect—in general theory was not his type of research. I always thought that he respected someone who “beat” him at the resource game, since he was such a strong player. I probably should not do it again, but doing it once was cool.

Years later, after Roger had moved to Northwestern University, he tried hard to hire me. Perhaps I should have gone. Oh well.

Open Problems

We want to announce this from Dilma Da Silva:

The Division Director, Computing and Communication Foundations (CCF) of NSF: NSF/CCF is looking for an IPA (rotator) Program Director (PD) for the Algorithm Foundation cluster. The job posting for the IPA PD position is available here. My colleague Tracy Kimbrel and I will be happy to address any questions that potential applicants may have.

[sourced Brockman quote, added subtitle for NSF announcement]

By rjlipton

TR23-008 | Limits of structures and Total NP Search Problems | Ondřej Ježil

from ECCC Papers

For a class of finite graphs, we define a limit object relative to some computationally restricted class of functions. The properties of the limit object then reflect how a computationally restricted viewer "sees" a generic instance from the class. The construction uses Krajíček's forcing with random variables [7]. We prove sufficient conditions for universal and existential sentences to be valid in the limit, provide several examples, and prove that such a limit object can then be expanded to a model of weak arithmetic. We then take the limit of all finite pointed paths to obtain a model of arithmetic where the problem OntoWeakPigeon is total but Leaf (the complete problem for $\textbf{PPA}$) is not. This can be viewed as a logical separation of the oracle classes of total NP search problems, which in our setting implies standard nonreducibility of Leaf to OntoWeakPigeon.

Saturday, February 04

Thursday, Feb 9th, 2023 — Amit Chakrabarti from Dartmouth College

from Foundation of Data Science - Virtual Talk Series

The next talk in the Foundations of Data Science virtual talk series on recent advances in adversarially robust streaming will take place on Thursday, February 9th at 2:00 PM Pacific Time (17:00 Eastern Time, 23:00 Central European Time, 22:00 UTC). Amit Chakrabarti from Dartmouth College will talk about “How to color your adversary’s graph”.

Details of the talk (Zoom link) are available here.

An n-vertex graph with maximum degree D is (D+1)-colorable: an almost trivial combinatorial result, with an equally simple greedy algorithm to produce a (D+1)-coloring. However, given a stream of edges of such a graph, can we maintain a valid (D+1)-coloring as the edges arrive, while using not much more than O(n) space? What if the edges are chosen by an adversary who can look at our current coloring and add additional edges to try to confound us? This is the newly-popular setting of adversarially robust streaming algorithms and this talk is about the coloring problem in this setting.
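
For reference, the offline fact quoted above has a one-screen proof-by-code: the following is a minimal sketch of the greedy (D+1)-coloring (assuming an adjacency-dictionary input; it is of course not the adversarially robust streaming algorithm the talk is about).

```python
def greedy_coloring(adj):
    """Color a graph with at most D+1 colors, where D is the maximum degree.

    adj maps each vertex to the set of its neighbours. Each vertex receives
    the smallest color not already used by a colored neighbour, so at most
    deg(v)+1 <= D+1 colors are ever needed.
    """
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Example: a 5-cycle has maximum degree D = 2, so colors 0, 1, 2 suffice.
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
col = greedy_coloring(cycle)
assert all(col[u] != col[v] for u in cycle for v in cycle[u])
assert max(col.values()) <= 2
```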

We obtain upper and lower bound results for this problem. In O(n polylog n) space, we can maintain an O(D^(5/2))-coloring of such an adversarial graph. On the other hand, every adversarially robust coloring algorithm under the same space limitation must spend Omega(D^2) colors. We in fact prove more general results that trade off the space usage against the color budget. One interesting by-product of our work is that in combination with the celebrated Assadi-Chen-Khanna algorithm (SODA 2019), it provides the first separation between randomized and deterministic algorithms for the (ordinary, non-robust) streaming graph coloring problem.

Based on joint works [C.-Ghosh-Stoeckl] and [Assadi-C.-Ghosh-Stoeckl].

 The series is supported by the NSF HDR TRIPODS Grant 1934846.

By dstheory

Friday, February 03

Postdoc at IST Austria (apply by March 15, 2023)

from CCI: jobs

Postdoctoral positions at IST Austria are available, funded by the ERC Advanced Grant “Modern Dynamic Data Structures”, led by Prof. Monika Henzinger.

Website: https://ist.ac.at/en/job/postdoc-research-group-monika-henzinger/
Email: monika.henzinger@ista.ac.at

By shacharlovett

TR23-007 | Extended Nullstellensatz proof systems | Jan Krajicek

from ECCC Papers

For a finite set $\cal F$ of polynomials over a fixed finite prime field of size $p$ containing all polynomials $x^2 - x$, a Nullstellensatz proof of the unsolvability of the system $$ f = 0\ ,\ \mbox{ all } f \in {\cal F} $$ is a linear combination $\sum_{f \in {\cal F}} \ h_f \cdot f$ that equals to $1$ in the ring of polynomials. The measure of complexity of such a proof is its degree: $\max_f deg(h_f f)$. We study the problem to establish degree lower bounds for some {\em extended} NS proof systems: these systems prove the unsolvability of $\cal F$ by proving the unsolvability of a bigger set ${\cal F}\cup {\cal E}$, where set $\cal E$ may use more variables $r$ and contains polynomials $r^p - r$ for them, and satisfies the following soundness condition: -- Any $0,1$-assignment ${\overline a}$ to variables ${\overline x}$ can be appended by an assignment ${\overline b}$ to variables $\overline r$ such that for all $g \in {\cal E}$ it holds that $g(\overline a, \overline b) = 0$. We define a notion of pseudo-solutions of $\cal F$ and prove that the existence of pseudo-solutions with suitable parameters implies lower bounds for two extended NS proof systems ENS and UENS defined in Buss et al. (1996/97). Further we give a combinatorial example of $\cal F$ and candidate pseudo-solutions based on the pigeonhole principle.
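
For a concrete (toy) instance of the definition, the following sympy snippet checks a degree-$1$ Nullstellensatz refutation of the unsolvable system $\{x,\ x+1\}$ over the two-element field; it only illustrates the notions of certificate and degree, not the extended systems ENS and UENS studied in the paper.

```python
from sympy import symbols, Poly, GF

x = symbols('x')
dom = GF(2)

# Unsolvable system over GF(2): f1 = x and f2 = x + 1 cannot both vanish.
# (The paper's setting also includes x^2 - x in the system; adding it with
# coefficient 0 would change nothing below.)
f1 = Poly(x, x, domain=dom)
f2 = Poly(x + 1, x, domain=dom)

# Certificate h1 = h2 = 1: 1*x + 1*(x + 1) = 2x + 1 = 1 over GF(2).
h1 = Poly(1, x, domain=dom)
h2 = Poly(1, x, domain=dom)
assert h1 * f1 + h2 * f2 == Poly(1, x, domain=dom)

# Degree of the proof: max_f deg(h_f * f) = 1.
print(max((h1 * f1).degree(), (h2 * f2).degree()))
```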

This Game Is Not Going To Analyze Itself

from arXiv: Computational Complexity

Authors: Aviv Adler, Joshua Ani, Lily Chung, Michael Coulombe, Erik D. Demaine, Yevhenii Diomidov, Dylan Hendrickson, Jayson Lynch

We analyze the puzzle video game This Game Is Not Going To Load Itself, where the player routes data packets of three different colors from given sources to given sinks of the correct color. Given the sources, sinks, and some previously placed arrow tiles, we prove that the game is in Sigma_2^P; in NP for sources of equal period; NP-complete for three colors and six equal-period sources with player input; and even without player input, simulating the game is both NP- and coNP-hard for two colors and many sources with different periods. On the other hand, we characterize which locations for three data sinks admit a perfect placement of arrow tiles that guarantee correct routing no matter the placement of the data sources, effectively solving most instances of the game as it is normally played.

Order-Preserving Squares in Strings

from arXiv: Data Structures and Algorithms

Authors: Paweł Gawrychowski, Samah Ghazawi, Gad M. Landau

An order-preserving square in a string is a fragment of the form $uv$ where $u\neq v$ and $u$ is order-isomorphic to $v$. We show that a string $w$ of length $n$ over an alphabet of size $\sigma$ contains $\mathcal{O}(\sigma n)$ order-preserving squares that are distinct as words. This improves the upper bound of $\mathcal{O}(\sigma^{2}n)$ by Kociumaka, Radoszewski, Rytter, and Wale\'n [TCS 2016]. Further, for every $\sigma$ and $n$ we exhibit a string with $\Omega(\sigma n)$ order-preserving squares that are distinct as words, thus establishing that our upper bound is asymptotically tight. Finally, we design an $\mathcal{O}(\sigma n)$ time algorithm that outputs all order-preserving squares that occur in a given string and are distinct as words. By our lower bound, this is optimal in the worst case.
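
To illustrate the definition (a brute-force sketch only, not the paper's $\mathcal{O}(\sigma n)$-time algorithm), the following checks order-isomorphism position by position and enumerates the order-preserving squares of a short string.

```python
def order_isomorphic(u, v):
    """u and v are order-isomorphic if their letters compare identically
    at every pair of positions (including ties)."""
    return len(u) == len(v) and all(
        (u[i] < u[j]) == (v[i] < v[j]) and (u[i] == u[j]) == (v[i] == v[j])
        for i in range(len(u)) for j in range(len(u)))

def order_preserving_squares(w):
    """Brute-force enumeration of order-preserving squares uv occurring in w,
    kept distinct as words. Quartic time; for illustration only."""
    found = set()
    n = len(w)
    for start in range(n):
        for half in range(1, (n - start) // 2 + 1):
            u = tuple(w[start:start + half])
            v = tuple(w[start + half:start + 2 * half])
            if u != v and order_isomorphic(u, v):
                found.add(u + v)
    return found

# In w = (1, 2, 2, 3): single-letter pairs with distinct letters are trivially
# order-isomorphic, and u = (1, 2), v = (2, 3) gives a longer square.
print(order_preserving_squares([1, 2, 2, 3]))  # {(1, 2), (2, 3), (1, 2, 2, 3)}
```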

A Universal Technique for Machine-Certified Proofs of Linearizable Algorithms

from arXiv: Data Structures and Algorithms

Authors: Prasad Jayanti, Siddhartha Jayanti, Ugur Y. Yavuz, Lizzie Hernandez

Linearizability has been the long standing gold standard for consistency in concurrent data structures. However, proofs of linearizability can be long and intricate, hard to produce, and extremely time consuming even to verify. In this work, we address this issue by introducing simple $universal$, $sound$, and $complete$ proof methods for producing machine-verifiable proofs of linearizability and its close cousin, strong linearizability. Universality means that our method works for any object type; soundness means that an algorithm can be proved correct by our method only if it is linearizable (resp. strong linearizable); and completeness means that any linearizable (resp. strong linearizable) implementation can be proved so using our method. We demonstrate the simplicity and power of our method by producing proofs of linearizability for the Herlihy-Wing queue and Jayanti's single-scanner snapshot, as well as a proof of strong linearizability of the Jayanti-Tarjan union-find object. All three of these proofs are machine-verified by TLAPS (the Temporal Logic of Actions Proof System).

Constant RMR Recoverable Mutex under System-wide Crashes

from arXiv: Data Structures and Algorithms

Authors: Prasad Jayanti, Siddhartha Jayanti, Anup Joshi

We design two Recoverable Mutual Exclusion (RME) locks for the system-wide crash model. Our first algorithm requires only $O(1)$ space per process, and achieves $O(1)$ worst-case RMR complexity in the CC model. Our second algorithm enhances the first algorithm to achieve (the same) $O(1)$ space per process and $O(1)$ worst-case RMR complexity in both the CC and DSM models. Furthermore, both algorithms allow dynamically created threads of arbitrary names to join the protocol and access the locks. To our knowledge, these are the only RME locks to achieve worst-case $O(1)$ RMR complexity assuming nothing more than standard hardware support. In light of Chan and Woelfel's $\Omega(\log n / \log\log n)$ worst-case RMR lower bound for RME in the individual crash model, our results show a separation between the system-wide crash and individual crash models in worst-case RMR complexity in both the CC and DSM models.

Rethinking Warm-Starts with Predictions: Learning Predictions Close to Sets of Optimal Solutions for Faster $\text{L}$-/$\text{L}^\natural$-Convex Function Minimization

from arXiv: Data Structures and Algorithms

Authors: Shinsaku Sakaue, Taihei Oki

An emerging line of work has shown that machine-learned predictions are useful to warm-start algorithms for discrete optimization problems, such as bipartite matching. Previous studies have shown time complexity bounds proportional to some distance between a prediction and an optimal solution, which we can approximately minimize by learning predictions from past optimal solutions. However, such guarantees may not be meaningful when multiple optimal solutions exist. Indeed, the dual problem of bipartite matching and, more generally, $\text{L}$-/$\text{L}^\natural$-convex function minimization have arbitrarily many optimal solutions, making such prediction-dependent bounds arbitrarily large. To resolve this theoretically critical issue, we present a new warm-start-with-prediction framework for $\text{L}$-/$\text{L}^\natural$-convex function minimization. Our framework offers time complexity bounds proportional to the distance between a prediction and the set of all optimal solutions. The main technical difficulty lies in learning predictions that are provably close to sets of all optimal solutions, for which we present an online-gradient-descent-based method. We thus give the first polynomial-time learnability of predictions that can provably warm-start algorithms regardless of multiple optimal solutions.

Speed-Oblivious Online Scheduling: Knowing (Precise) Speeds is not Necessary

from arXiv: Data Structures and Algorithms

Authors: Alexander Lindermayr, Nicole Megow, Martin Rapp

We consider online scheduling on unrelated (heterogeneous) machines in a speed-oblivious setting, where an algorithm is unaware of the exact job-dependent processing speeds. We show strong impossibility results for clairvoyant and non-clairvoyant algorithms and overcome them in models inspired by practical settings: (i) we provide competitive learning-augmented algorithms, assuming that (possibly erroneous) predictions on the speeds are given, and (ii) we provide competitive algorithms for the speed-ordered model, where a single global order of machines according to their unknown job-dependent speeds is known. We prove strong theoretical guarantees and evaluate our findings on a representative heterogeneous multi-core processor. These seem to be the first empirical results for algorithms with predictions that are performed in a non-synthetic environment on real hardware.

Thursday, February 02

Postdoc at Centre for Quantum Computer Science, University of Latvia (apply by February 20, 2023)

from CCI: jobs

Applications are invited for a postdoctoral position at the Centre for Quantum Computer Science, University of Latvia (https://www.quantum.lu.lv), led by Prof. Andris Ambainis. The Centre is among the strongest European research groups in quantum algorithms and currently consists of 18 researchers (including postdocs and graduate students).

Website: https://quantum.lu.lv/join-us/
Email: andris.ambainis@lu.lv

By shacharlovett

Non-convex quadratic optimization problems

from Francis Bach

Among continuous optimization problems, convex problems (with convex objectives and convex constraints) define a class that can be solved efficiently with a variety of algorithms and with arbitrary precision. This is not true more generally when the convexity assumption is removed (see this post). This of course does not mean that (1) nobody should attempt to solve high-dimensional non-convex problems (in fact, the spell checker run on this document was trained by solving such a problem…), nor that (2) no other problems have efficient solutions.

In this post, we will look at a classical class of continuous optimization problems that can be solved efficiently, namely quadratic optimization problems on the Euclidean sphere or ball. That is, we look at solving $$\tag{1} \min_{ \| x\| \leqslant 1} \ \frac{1}{2} x^\top A x \, - b^\top x, $$ and $$\tag{2} \min_{ \| x\| = 1} \ \frac{1}{2} x^\top A x \, - b^\top x, $$ for \(\|x\|^2 = x^\top x\) the standard squared Euclidean norm.

The matrix \(A \in \mathbb{R}^{n \times n}\) is only assumed to be symmetric (it need not be positive semi-definite), and \(b \in \mathbb{R}^n\). Therefore, the objective may not be convex, and the constraint set (in the case of the sphere) is not convex either. We could replace the standard squared Euclidean norm \(x^\top x\) by any squared Mahalanobis norm \(x^\top B x\) for \(B\) positive-definite, but to keep it simple, let’s only consider \(B = I\).

Note that there are other continuous non-convex optimization problems that can be solved efficiently through a convex reformulation, such as the minimization of one-dimensional (trigonometric) polynomials, and more generally sum-of-squares problems (see this post). If you are aware of many more beyond combinatorial optimization problems, please let me know.

Special case of eigenvalues. If \(b=0\) (no linear term), then the solution of Problem \((2)\) is the eigenvector associated with the smallest eigenvalue of \(A\), while the solution of Problem \((1)\) is the same eigenvector if the smallest eigenvalue of \(A\) is negative, and zero otherwise. This has a large number of applications, is the topic of tons of “spectral relaxation” works, and will not be the focus of this post.
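
As a quick numerical sanity check of this special case (a small numpy sketch with a randomly drawn symmetric \(A\)):

```python
import numpy as np

# Sanity check of the b = 0 special case: on the unit sphere, (1/2) x^T A x
# is minimized by a unit eigenvector for the smallest eigenvalue of A.
rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                      # symmetric, not necessarily PSD

eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in increasing order
x_star = eigvecs[:, 0]                 # eigenvector of the smallest eigenvalue
f = lambda x: 0.5 * x @ A @ x          # objective of Problem (2) with b = 0

# Random points on the sphere never do better than the eigenvector.
Z = rng.standard_normal((10000, n))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)
sampled = 0.5 * np.einsum('ij,jk,ik->i', Z, A, Z)
assert f(x_star) <= sampled.min() + 1e-12
print(f(x_star), 0.5 * eigvals[0])     # both equal lambda_min / 2
```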

Applications

Problems \((1)\) and \((2)\) appear in several areas and first appeared in [2] in 1965 (if you know of an earlier reference, please let me know). The three main occurrences that I am aware of are in trust-region methods, constrained eigenvalue problems, and relaxation of binary optimization problems.

Trust-region methods. When minimizing a differentiable function \(f\) on \(\mathbb{R}^n\), the classical gradient descent iteration $$x^+ = x \, - \gamma f'(x)$$ can be seen as the solution of $$\min_{y \in \mathbb{R}^n} \ f(x) + f'(x)^\top ( y\, - x ) \mbox{ such that } \| y\, - x\| \leqslant \delta,$$ for \(\delta = \gamma \| f'(x)\|_2\). This corresponds to minimizing the first-order Taylor expansion in a ball centered at \(x\), and leads to the minimization of an affine function on a Euclidean ball. When using the second-order model, we get to solve $$\min_{y \in \mathbb{R}^n} \ f(x) + f'(x)^\top ( y \, - x ) + \frac{1}{2} ( y\, -x )^\top f^{\prime \prime}(x) ( y \, -x ) \mbox{ such that } \| y \, - x\| \leqslant \delta, $$ which can be cast as Problem \((1)\).
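
To make the reduction explicit: writing \(y = x + \delta d\) with \(\| d \| \leqslant 1\), and dropping the additive constant \(f(x)\), the second-order model becomes $$\min_{\| d\| \leqslant 1} \ \frac{1}{2} d^\top \big( \delta^2 f^{\prime \prime}(x) \big) d \, - \big( -\delta f'(x) \big)^\top d, $$ which is Problem \((1)\) with \(A = \delta^2 f^{\prime \prime}(x)\) and \(b = -\delta f'(x)\).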

The intuitive idea is that the Taylor expansion is only valid locally, so we optimize it locally instead of globally, as the classical Newton method would do. Moreover, the resulting step is well-defined even for singular Hessians. See the figure below and [4] for more details.

Figure: We consider two quadratic functions and we compare the minimization of their linear and quadratic Taylor expansions under a ball constraint with increasing radius. Left: convex function (hence the quadratic minimization ends up reaching the global minimum). Right: function with a saddle. Below, only the quadratic minimization path is shown for several starting points.

Constrained eigenvalue problems. If we aim to minimize \(x^\top A x\) subject to \(x^\top x = 1\) and an affine constraint [3], then, by writing the affine constraint as \(x = Cz+d\), we obtain the minimization of a quadratic-linear function subject to a quadratic-linear constraint, which we can rewrite in a form similar to Problem \((2)\).

Relaxation of binary optimization problems. When minimizing a linear-quadratic function on \(\{-1,1\}^n\), we can relax it by replacing the constraint \(x \in \{-1,1\}^n\) by \(x^\top x = n\).
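
Concretely, after the rescaling \(x = \sqrt{n} \, z\), the relaxed problem reads $$\min_{x^\top x = n} \ \frac{1}{2} x^\top A x \, - b^\top x \ = \ \min_{z^\top z = 1} \ \frac{1}{2} z^\top (n A) z \, - (\sqrt{n} \, b)^\top z, $$ that is, an instance of Problem \((2)\) with \(A\) replaced by \(nA\) and \(b\) by \(\sqrt{n}\, b\).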

From local to global optimality conditions

Let’s now look at optimality conditions from first principles (see [5, 6] for more details), before relating them to a broader discussion on tight semi-definite relaxations.

Existence of minimizers. Minimizers always exist for \(f(x) = \frac{1}{2} x^\top A x \, - b^\top x\) since the two sets \(\mathbb{S} = \{ x \in \mathbb{R}^n, x^\top x = 1\}\) and \(\mathbb{B} = \{ x \in \mathbb{R}^n, x^\top x \leqslant 1\}\) are compact. Therefore, the problems are well-formulated.

First-order necessary conditions on the sphere. We consider an optimal \(x \in \mathbb{S}\). For any \(y \in \mathbb{S}\) which is orthogonal to \(x\), and any \(\theta \in \mathbb{R}\), we have: $$f( \cos \theta \cdot x + \sin \theta \cdot y) = f(x) + f'(x)^\top y \cdot \theta + o(\theta).$$ Thus, since \(\cos \theta \cdot x + \sin \theta \cdot y\) is always on \(\mathbb{S}\), we must have \(f'(x)^\top y=0\), and this holds for all \(y\) orthogonal to \(x\). Thus \(f'(x)\) has to be proportional to \(x\), that is, there exists \(\mu \in \mathbb{R}\) such that \(f'(x) + \mu x = 0\), that is, \((A + \mu I) x = b\).

First-order necessary conditions on the ball. If \(x \in \mathbb{B}\) is optimal and in the interior, that is, \(x^\top x < 1\), then we directly have \(f'(x) = 0\). If \(x \in \mathbb{S}\), it has to be optimal for the sphere, and thus there exists \(\mu \in \mathbb{R}\) such that \(f'(x) + \mu x = 0\). By considering that \(g: t \mapsto f(t x)\) has to be minimized on \([0,1]\), for \(t=1\), we must have \(g'(1) \leqslant 0\), i.e., \(\mu = - f'(x)^\top x \geqslant 0\).

In order to cover the interior case, we need to add the “complementary slackness” condition \(\mu ( 1 -x^\top x)=0\).

Obtaining necessary conditions from Lagrange duality. We can obtain the same first-order optimality conditions using Lagrange duality, by adding a Lagrange multiplier \(\mu \in \mathbb{R}\) for the equality constraint \(x^\top x = 1\), or \(\mu \in \mathbb{R}_+\) for the inequality constraint \(x^\top x \leqslant 1\), and forming the Lagrangian $$\tag{3} \mathcal{L}(x,\mu) = \frac{1}{2} x^\top A x\, – b^\top x + \frac{1}{2} \mu ( x^\top x\, – 1).$$ A necessary condition is thus that the partial derivative with respect to \(x\) is zero for a certain \(\mu\), which is exactly the condition \(f'(x) + \mu x = 0\) above.

Second-order conditions on the sphere. Assuming that \(f'(x) + \mu x = 0\), with \(\mu\) potentially negative (i.e., the first-order optimality conditions are satisfied), we then have, for any \(y \in \mathbb{S}\), $$\begin{array}{rcl}f(y) & = & f(x) + f'(x)^\top(y-x) + \frac{1}{2}(x-y)^\top A ( x-y) \\ & = & f(x) + \frac{1}{2}(x-y)^\top ( A + \mu I) ( x-y) + \frac{\mu}{2} ( x^\top x - y^\top y). \end{array}$$ Thus, if \(x\) is optimal, we must have \((x-y)^\top ( A + \mu I) ( x-y) \geqslant 0\) for all \(y \in \mathbb{S}\), which implies that \(A+ \mu I \succcurlyeq 0\). Note that our reasoning implies that the optimality condition, that is, the existence of \(\mu \in \mathbb{R}\) such that $$\begin{array}{l} ( A+ \mu I) x = b \\ A+ \mu I \succcurlyeq 0 \\ x^\top x = 1 , \end{array} $$ is necessary and sufficient for the optimality of \(x\). These conditions match the Karush-Kuhn-Tucker (KKT) conditions, which are necessary regardless of convexity; what is remarkable is that they are also sufficient here. This is one of the few non-convex optimization problems where strong duality holds.

Second-order conditions on the ball. We also get the following necessary and sufficient condition, that is, the existence of \(\mu \in \mathbb{R}_+\) such that $$\begin{array}{l} ( A+ \mu I) x = b \\ A+ \mu I \succcurlyeq 0 \\ x^\top x \leqslant 1 \\ \mu \, ( 1 - x^\top x) = 0. \end{array}$$

In both cases, once \(\mu\) is known, we can recover the optimizers \(x\). We now focus on the sphere for simplicity.

Equivalence to a one-dimensional problem

We can define the function \((M,u) \mapsto u^\top M^{-1} u\) as the minimal \(t \in \mathbb{R}\) such that the matrix \(\bigg( \begin{array}{cc} \!M\!\! & \!u\! \\[-.1cm] \!u^\top \!\! & \! t \! \end{array} \bigg) \succcurlyeq 0\) (this is the Schur complement characterization of positive semidefiniteness). It is thus jointly convex in \((M,u)\), and it is infinite when \(M\) is not positive semi-definite (PSD). When \(M\) is PSD but not invertible, the function is finite if and only if \(u\) is in the column space of \(M\). We can similarly define \(u^\top M^{-2} u\).

We can now get the dual problem associated to the Lagrangian in \((3)\), by minimizing it with respect to \(x\), leading to $$\max_{\mu \in \mathbb{R}} \ – \frac{\mu}{2} \, – \frac{1}{2} b^\top ( A+\mu I)^{-1} b, $$ which is a concave maximization problem in one dimension (with the constraint that \(A + \mu I \succcurlyeq 0\)).
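For completeness, the short calculation behind this dual goes as follows: when \(A + \mu I \succ 0\), minimizing the Lagrangian \((3)\) over \(x\) gives \(x = (A+\mu I)^{-1} b\), and plugging this back in yields $$\min_{x \in \mathbb{R}^n} \mathcal{L}(x,\mu) = -\frac{\mu}{2} - \frac{1}{2}\, b^\top ( A+\mu I)^{-1} b.$$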

Thus, a simple algorithm for solving the problem is to solve this one-dimensional concave maximization problem. Once an eigenvalue decomposition \(A = \sum_{i=1}^n \! \lambda_i u_i u_i^\top\) has been obtained, we need to maximize $$ \tag{4} - \frac{\mu}{2} \, - \frac{1}{2} \sum_{i=1}^n \frac{ (b^\top u_i)^2}{\lambda_i + \mu}. $$

Assuming that \(\lambda_1 \geqslant \lambda_2 \geqslant \cdots \geqslant \lambda_n\), we have the constraint \(\lambda_n + \mu \geqslant 0\). We first need to check if \(\mu = -\lambda_n\) is the solution, which occurs when \(b^\top ( A+ \mu I)^{-2} b \leqslant 1\) (the problem is then called “degenerate”, and this can only happen if \(b\) is orthogonal to the eigensubspace of \(A\) associated with the smallest eigenvalue \(\lambda_n\), which is rather uncommon). Otherwise, the maximum is attained at \(\mu > -\lambda_n\) (note that since we have assumed \(b \neq 0\), the problem is strictly concave and thus has a unique maximizer in \(\mu\)). Moreover, \(\mu\) is characterized by the equation $$ \tag{5} b^\top ( A+ \mu I)^{-2} b = 1,$$ which can be obtained directly from the optimality conditions.

This one-dimensional problem can be solved using Newton’s method [6, 7] to estimate \(\mu\) given the eigendecomposition of \(A\). There are also cheaper, less precise algorithms that do not require a full eigendecomposition. We will also see below a surprising reformulation as a simple eigenvalue problem.
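To make this concrete, here is a minimal Python sketch (mine, not from the original post; the function name and the bisection safeguard are illustrative) that solves the secular equation \((5)\) in the non-degenerate case by bisection after a full eigendecomposition, rather than with the safeguarded Newton iterations of [6, 7]:

import numpy as np

def min_quadratic_on_sphere(A, b, tol=1e-12, max_iter=200):
    # Minimize 0.5 * x'Ax - b'x subject to x'x = 1 (non-degenerate case, b != 0).
    lam, U = np.linalg.eigh(A)           # eigenvalues of A in increasing order
    beta = U.T @ b                       # coordinates of b in the eigenbasis of A
    def phi(mu):                         # phi(mu) = b' (A + mu I)^{-2} b - 1
        return np.sum(beta ** 2 / (lam + mu) ** 2) - 1.0
    lo = -lam[0] + 1e-12                 # just inside the region where A + mu I > 0
    hi = -lam[0] + np.linalg.norm(b)     # phi(hi) <= 0 since lam[i] + hi >= ||b|| for all i
    if phi(lo) < 0:
        raise ValueError("degenerate ('hard') case, not handled by this sketch")
    for _ in range(max_iter):            # bisection on the decreasing function phi
        mu = 0.5 * (lo + hi)
        lo, hi = (mu, hi) if phi(mu) > 0 else (lo, mu)
        if hi - lo < tol:
            break
    x = U @ (beta / (lam + mu))          # x = (A + mu I)^{-1} b
    return x, mu

On a random instance, the returned pair satisfies \((A+\mu I) x \approx b\), \(x^\top x \approx 1\) and \(A + \mu I \succcurlyeq 0\), matching the optimality conditions above.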

Other “secular” equations. Equation \((5)\) is often referred to as a secular equation. There are other types of similar equations, in particular for rank-one perturbations of the symmetric eigenvalue problem [8, 9].

Semi-definite relaxations

We can now give a more modern take on the quadratic minimization problem on the sphere, using semi-definite programming. We can first rewrite the objective function in Equation \((1)\) as $$f(x) = \frac{1}{2}x^\top A x - b^\top x = \frac{1}{2} {\rm tr}(AX) - b^\top x, $$ with \(X = xx^\top\). We now have a linear objective in \((X,x)\). Moreover, the matrix \(X\) satisfies the convex constraints $$ X \succcurlyeq xx^\top \Leftrightarrow \left( \begin{array}{cc} \!X\!\! & \!x\! \\[-.1cm] \!x^\top \!\!&\! 1\! \end{array} \right) \succcurlyeq 0, $$ and \({\rm tr}(X) = x^\top x = 1\). However, the rank-one constraint \(X = xx^\top\) is not convex.

A classical tool in optimization is to remove the rank-one constraint, and only obtain a lower bound (a so-called “relaxation”), with the following optimization problem: $$\tag{6} \min_{ X, x} \frac{1}{2} {\rm tr}(AX)-b^\top x \mbox{ such that } \left( \begin{array}{cc} \!X\!\! & \!x\! \\[-.1cm] \!x^\top \!\!&\! 1\! \end{array} \right) \succcurlyeq 0 \mbox{ and } {\rm tr}(X)=1. $$ One can check that the dual problem is exactly Equation \((4)\), and thus the relaxation is here tight. Moreover, the SDP formulation can be used to derive algorithms that do not need a full eigenvalue decomposition [12].
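For readers who want to check this numerically, here is a minimal sketch of relaxation \((6)\) (my own code, assuming the cvxpy package with an SDP-capable solver such as SCS; the function name is illustrative):

import cvxpy as cp

def trust_region_sdp(A, b):
    # Relaxation (6): min 0.5 tr(AX) - b'x  s.t.  [[X, x], [x', 1]] >= 0 and tr(X) = 1.
    n = A.shape[0]
    Z = cp.Variable((n + 1, n + 1), PSD=True)   # Z plays the role of [[X, x], [x', 1]]
    X, x = Z[:n, :n], Z[:n, n]
    problem = cp.Problem(cp.Minimize(0.5 * cp.trace(A @ X) - b @ x),
                         [cp.trace(X) == 1, Z[n, n] == 1])
    problem.solve()
    return problem.value, x.value

Since the relaxation is tight, the returned value should match (up to solver accuracy) the one obtained from the eigenvalue-based snippets below.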

Semi-definite relaxation of QCQP’s. Problems \((1)\) and \((2)\) are in fact instances of quadratically constrained quadratic programming problems, and the problem \((6)\) is the usual semi-definite relaxation. It turns out that with a single constraint, such relaxations are always tight, owing to the S-lemma [10] (see a nice derivation in Boyd and Vandenberghe’s book [11, Appendix B.1]).

Tight sum-of-squares relaxation. Yet another reformulation is through sum-of-squares (see an earlier post), where we consider the feature vector \(\varphi(x) = {x \choose 1}\) and represent non-negative functions as quadratic forms \(x \mapsto \varphi(x)^\top B \varphi(x)\). The problem in \((2)\) can then be relaxed as $$\max_{c \in \mathbb{R}, \ B \succcurlyeq 0} c \ \mbox{ such that } \ f(x) \, – c = \varphi(x)^\top B \varphi(x),$$ which is exactly the tight SDP relaxation above.

Having fun with adding affine constraints. Recently I had to look at Problem \((2)\) with an extra affine constraint, which I will take for simplicity of the form \(c^\top x = 1\) (for a vector \(c \in \mathbb{R}^n\) such that \(\|c\| > 1\), to avoid a trivial problem). By projecting on the subspace orthogonal to \(c \in \mathbb{R}^n\), we obtain again a quadratic minimization problem, this time on a Euclidean sphere embedded in a space of dimension \(n-1\). Therefore, we can apply the above techniques to the reduced problem. However, I did not want to do that and wanted to keep the original formulation on \(\mathbb{R}^n\), so I tried to use duality to solve it. Two natural possibilities emerge here.

In order to solve it, we could first imagine using Lagrange duality, with a Lagrange multiplier \(\mu\) for the constraint \(x^\top x = 1\) (this worked exactly without the extra affine constraint), and now an extra Lagrange multiplier \(\nu\) for the constraint \(c^\top x = 1\). This leads to the Lagrangian $$ \mathcal{L}(x,\mu,\nu) = \frac{1}{2} x^\top A x - b^\top x + \frac{1}{2} \mu ( x^\top x - 1) + \nu( c^\top x -1),$$ and thus, after a short calculation, the dual problem $$\max_{\mu,\nu \in \mathbb{R}} \ -\frac{\mu}{2} - \nu - \frac{1}{2} (b - \nu c)^\top ( A + \mu I)^{-1} (b - \nu c).$$ Another Lagrangian can be obtained with the equivalent constraint \((c^\top x - 1)^2 = 0\), leading to a new Lagrangian $$ \mathcal{L}'(x,\mu,\nu') = \frac{1}{2} x^\top A x - b^\top x + \frac{1}{2} \mu ( x^\top x - 1) + \frac{1}{2} \nu' (c^\top x - 1)^2,$$ and then the dual problem $$\max_{\mu,\nu' \in \mathbb{R}} \ -\frac{\mu}{2} + \frac{\nu'}{2} - \frac{1}{2} (b+\nu' c)^\top ( A + \nu' cc^\top + \mu I)^{-1} (b+\nu' c).$$ Are they both tight? Make up your own mind and see the bottom of the post for the answer.

Amazing eigenvalue reformulations

The Newton method to solve the one-dimensional problem is efficient, but requires some safeguards to work properly, and a full eigenvalue decomposition. It turns out that one can obtain exact reformulations as eigenvalue problems for a single eigenvalue, for which efficient packages exist.

From [3], for the optimization on the sphere, we can obtain the optimal \(\mu\) from the largest real eigenvalue of the following non-symmetric matrix: $$\left( \begin{array}{cc} \!-A\! & \!\! I\! \\[-.1cm] \! bb^\top \!& \!\! -A\! \end{array} \right).$$ Indeed, one can check that, in the non-degenerate case, given the optimal \((x,\mu)\), the vector \(y = \left( \begin{array}{c} \!(A+\mu I)^{-1} x \! \\[-.1cm] x \end{array} \right)\) is an eigenvector of the \(2n \times 2n\) matrix above, with eigenvalue \(\mu\).

This leads to two lines of code to solve the problem, at least for the non-degenerate case! See more details in [3, 14], in particular on how to deal with the degenerate case, often called the “hard case”. See the code snippets below in Matlab, Julia, and Python.

(In all three snippets, A is the n x n symmetric matrix and b the n-dimensional vector defining \(f\), assumed already defined.)

Matlab:
[y, mu] = eigs([-A, eye(n); b*b', -A], 1, 'largestreal');
x = y(n+1:2*n) / (b'*y(1:n));
% or: x = sign(b'*y(1:n)) * y(n+1:2*n) / norm(y(n+1:2*n));

Julia:
using LinearAlgebra, Arpack
E = eigs([-A I(n); b*b' -A], nev=1, which=:LR)
y, μ = E[2][:, 1], E[1][1]
x = y[n+1:2n] ./ (b' * y[1:n])
# or: x = sign.(b' * y[1:n]) .* y[n+1:2n] / norm(y[n+1:2n])

Python:
import numpy as np
import scipy.sparse.linalg
M = np.block([[-A, np.eye(n)], [np.outer(b, b), -A]])
mu, y = scipy.sparse.linalg.eigs(M, k=1, which='LR', return_eigenvectors=True)
x = y[n:2*n] / np.dot(b, y[:n])
# or: x = np.sign(np.dot(b, y[:n])) * y[n:2*n] / np.linalg.norm(y[n:2*n])

Symmetric generalized eigenproblems. If you prefer symmetric matrices, one can obtain a similar result with the generalized eigenvector of the two matrices $$\left( \begin{array}{cc} \!I\!\! & \!\!-A\! \\ \!-A\!\! & \! bb^\top\! \end{array} \right) \ \mbox{ and } \ \left( \begin{array}{cc} \! 0 \! & \! \! I \\[-.1cm] I \!\! & \!\! 0 \end{array} \right).$$ If you want to avoid forming a potentially dense matrix \(bb^\top\), you can use instead the matrices $$\left( \begin{array}{ccc} \!-1\! & \! 0\! & \! b^\top \! \\[-.1cm] \!0\! &\! I \!&\! -A \! \\[-.1cm]\! b \!&\! -A \!& \!0\! \end{array} \right) \ \mbox{ and } \ \left( \begin{array}{ccc} \! 0\! &\! 0 \!& \! 0\! \\[-.1cm]\! 0 \! & \! 0 \!&\! I \! \\[-.1cm] \! 0 \! &\! I \! &\! 0 \! \end{array} \right).$$ See all details in [13]. Note that, beyond the two-line code above, which leads to precise solutions, more efficient algorithms exist that lead to approximate solutions [14, 15].

Conclusion

In this blog post, I described one of the few non-convex problems where strong duality holds. There are many other instances within combinatorial optimization (that is, with variables in \(\{0,1\}^n\) or \(\{-1,1\}^n\)), in particular related to submodularity. I will hopefully cover these in future posts.

Acknowledgements. I would like to thank Alessandro Rudi, Gaspard Beugnot, and ChatGPT for helping with the code snippets.

References

[2] George E. Forsythe and Gene H. Golub. On the stationary values of a second-degree polynomial on the unit sphere. Journal of the Society for Industrial and Applied Mathematics, 13(4):1050-1068, 1965.
[3] Walter Gander, Gene H. Golub, and Urs Von Matt. A constrained eigenvalue problem. Linear Algebra and its Applications, 114:815-839, 1989.
[4] Andrew R. Conn, Nicholas I. M. Gould, and Philippe L. Toint. Trust region methods. Society for Industrial and Applied Mathematics, 2000.
[5] William W. Hager. Minimizing a quadratic over a sphere. SIAM Journal on Optimization, 12(1):188-208, 2001.
[6] Danny C. Sorensen. Newton’s method with a model trust region modification. SIAM Journal on Numerical Analysis, 19(2):409-426, 1982.
[7] Danny C. Sorensen. Minimization of a large-scale quadratic function subject to a spherical constraint. SIAM Journal on Optimization, 7(1):141-161, 1997.
[8] James R. Bunch, Christopher P. Nielsen, and Danny C. Sorensen. Rank-one modification of the symmetric eigenproblem. Numerische Mathematik, 31(1):31-48, 1978.
[9] Ming Gu and Stanley C. Eisenstat. A stable and efficient algorithm for the rank-one modification of the symmetric eigenproblem. SIAM Journal on Matrix Analysis and Applications, 15(4):1266-1276, 1994.
[10] Imre Pólik and Tamás Terlaky. A survey of the S-lemma. SIAM Review, 49(3):371-418, 2007.
[11] Stephen P. Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[12] Franz Rendl and Henry Wolkowicz. A semidefinite framework for trust region subproblems with applications to large scale minimization. Mathematical Programming, 77(1):273-299, 1997.
[13] Satoru Adachi, Satoru Iwata, Yuji Nakatsukasa, and Akiko Takeda. Solving the trust-region subproblem by a generalized eigenvalue problem. SIAM Journal on Optimization, 27(1):269-291, 2017.
[14] Elad Hazan and Tomer Koren. A linear-time algorithm for trust region problems. Mathematical Programming, 158(1-2):363-381, 2016.
[15] Amir Beck and Yakov Vaisbourd. Globally solving the trust region subproblem using simple first-order methods. SIAM Journal on Optimization, 28(3):1951-1967, 2018.

Having fun with affine constraints

Let’s now look at the solution! The second relaxation is tight, while the first is not. To show that the first relaxation is not tight, it suffices to exhibit a counter-example in dimension \(n = 2\) (found from random instances). For example, for $$A = \left( \begin{array}{cc} \!3\! &\! 0\! \\[-.1cm] \! 0\! & \!-2 \! \end{array} \right) , \ \ b = \left( \begin{array}{c} \! 0\! \\[-.1cm] \!-1 \! \end{array} \right), \mbox{ and } \ c = \left( \begin{array}{c} \! 0\! \\[-.1cm] \!2 \! \end{array} \right),$$ a minimizer is \( x = \left( \begin{array}{c}\! \sqrt{3}/2\! \\[-.1cm] \! 1/2 \! \end{array} \right)\), with optimal value \(11/8\), while the non-tight relaxation leads to a value of \(-1/2\).
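As a quick numerical sanity check (my own sketch; the grid search only gives evidence, not a proof, that the first dual tops out near \(-1/2\)):

import numpy as np

A = np.diag([3.0, -2.0])
b = np.array([0.0, -1.0])
c = np.array([0.0, 2.0])

# Primal: c'x = 1 forces x2 = 1/2, and x'x = 1 then gives x1 = ±sqrt(3)/2.
x = np.array([np.sqrt(3) / 2, 0.5])
print(0.5 * x @ A @ x - b @ x)           # 1.375 = 11/8

# First dual, evaluated for mu > 2 so that A + mu*I is positive definite.
def dual1(mu, nu):
    r = b - nu * c
    return -mu / 2 - nu - 0.5 * r @ np.linalg.solve(A + mu * np.eye(2), r)

values = [dual1(mu, nu) for mu in np.linspace(2.001, 20, 300)
          for nu in np.linspace(-10, 10, 300)]
print(max(values))                       # about -0.5, well below 11/8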

To show the tightness of the second relaxation, we first notice that the convex problem is equivalent to the following SDP relaxation (written for a general right-hand side \(t\) in the affine constraint \(c^\top x = t\), with \(t=1\) in our case): $$\min_{ X, x} \frac{1}{2} {\rm tr}(AX)-b^\top x \mbox{ such that } \left( \begin{array}{cc} \!X\!\! & \!x\! \\[-.1cm] \!x^\top \!\!&\! 1\! \end{array} \right) \succcurlyeq 0, {\rm tr}(X)=1, \mbox{ and } {\rm tr}(cc^\top X) - 2 t c^\top x + t^2 = 0. $$ Given the PSD constraint and the fact that $${\rm tr}(cc^\top X) - 2 t c^\top x + t^2 = {\rm tr} \left( \begin{array}{cc} \!X\!\! & \!x\! \\[-.1cm] \!x^\top \!\!&\! 1\! \end{array} \right)\left( \begin{array}{cc} \! cc^\top \! & \! -tc \! \\[-.1cm] \!-tc^\top \! & \! t^2 \! \end{array} \right),$$ the new constraint implies that $$ \left( \begin{array}{cc} \!X\!\! & \!x\! \\[-.1cm] \!x^\top \!\!&\! 1\! \end{array} \right) \left( \begin{array}{c} \!c\! \\[-.1cm] \!-t \! \end{array} \right)= 0, $$ that is, \(Xc = t x\) and \(c^\top x = t\). One can then check that these constraints are exactly equivalent to a projection of the problem onto a space of dimension \(n-1\). The non-tight relaxation only enforces \(c^\top x = t\), which is not enough. It took me a while to realize it…

By Francis Bach

Responsibility

from Computational Complexity

Nature laid out their ground rules for large language models like ChatGPT including

No LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.

Let's focus on the last word "responsibility". What does that mean for an author? It means we can hold an author, or set of authors, responsible for any issues in the paper such as

  • The proofs, calculations, code, formulas, measurements, statistics, and other details of the research.
  • Any interpretation or conclusions made in the article
  • Properly citing related work, especially work that calls into question the novelty of this research
  • The article does not contain text identical or very similar to previous work.
  • Anything else described in the article.
The authors should take reasonable measures to ensure that a paper is free from any issues above. Nobody is perfect and if you make a mistake in a paper, you should, as with all mistakes, take responsibility and acknowledge the problems, do everything you can to rectify the issues, such as publishing a corrigendum if needed, and work to ensure you won't make similar mistakes in the future.
Mistakes can arise outside of an author's actions. Perhaps a computer chip makes faulty calculations, you relied on a faulty theorem in another paper, your main result appeared in a paper fifteen years ago in an obscure journal, a LaTeX package for the journal created some mistakes in the formulas or a student who helped with the research or exposition took a lazy way out, or you put too much trust in AI generative text. Nevertheless the responsibility remains with the authors. 
Could an AI ever take responsibility for an academic paper? Would a toaster ever take responsibility for burning my breakfast?

 

By Lance Fortnow


Assistant/Associate Professor at Ecole polytechnique, Institut Polytechnique de Paris (apply by March 15, 2023)

from CCI: jobs

Ecole polytechnique, leading engineering school in France, member of the Institut Polytechnique de Paris, welcomes outstanding applications for one assistant Monge Professor in Computer Science, specialty “Foundations of Computer Science”, tenure track teaching/research position. Website: portail.polytechnique.edu/informatique/en/recruitment2023 Email: samuel.mimram@polytechnique.edu

Ecole polytechnique, leading engineering school in France, member of the Institut Polytechnique de Paris, welcomes outstanding applications for one assistant Monge Professor in Computer Science, specialty “Foundations of Computer Science”, tenure track teaching/research position.

Website: https://portail.polytechnique.edu/informatique/en/recruitment2023
Email: samuel.mimram@polytechnique.edu

By shacharlovett

Mathematics of the impossible: Computational Complexity, Chapter 3: The grand challenge

from Emanuele Viola

All posts in this series. A PDF version of this post will be published with a delay, but if you’d like to have it soon let me know. Contents 1 The grand challenge 1.1 Information bottleneck: Palindromes requires quadratic time on TMs 1.2 Counting: impossibility results for non-explicit functions 1.3 Diagonalization: Enumerating machines 1.3.1 1.4 […]

All posts in this series.
A PDF version of this post will be published with a delay, but if you’d like to have it soon let me know.


Contents

1 The grand challenge
1.1 Information bottleneck: Palindromes requires quadratic time on TMs
1.2 Counting: impossibility results for non-explicit functions
1.3 Diagonalization: Enumerating machines
1.3.1 \text {TM-Time}(o(n\log n))=\text {TM-Time}(n)
1.4 Circuits
1.4.1 The circuit won’t fit in the universe: Non-asymptotic, cosmological results
1.5 Problems
References


As mentioned in Chapter ??, our ability to prove impossibility results related to efficient computation appears very limited. We can now express this situation more precisely with the models we’ve introduced since then.

It is consistent with our knowledge that any problem in a standard algorithm textbook can be solved

  1. in Time cn^{2} on a TM, and
  2. in Time cn on a 2-TM, and
  3. by circuits of size cn.

Note that 2. implies 1. by Theorem ??.

In this chapter we begin to present several impossibility results, covering a variety of techniques which will be used later as well. As hinted above, they appear somewhat weak. However, jumping ahead, there is a flip side to all of this:

  1. At times, contrary to our intuition, stronger impossibility results are actually false. One example appears in Chapter ??. A list will be given later.
  2. Many times, the impossibility results that we can prove turn out to be, surprisingly, just “short” of proving major results. Here by “major result” I mean a result that would be phenomenal and that was in focus long before the connection was established. We will see several examples of this (section §??, section §??).
  3. Yet other times, one can identify broad classes of proof techniques, and argue that impossibility results can’t be proved with them (section §??).

Given this situation, I don’t subscribe to the general belief that stronger impossibility results are true and we just can’t prove them.

1.1 Information bottleneck: Palindromes requires quadratic time on TMs

Intuitively, the weakness of TMs is the bottleneck of passing information from one end of the tape to the other. We now show how to formalize this and use it to show that deciding if a string is a palindrome requires quadratic time on TMs, which is tight and likely matches the time in Exercise ??. The same bound can be shown for other functions; palindromes just happen to be convenient to obtain matching bounds.

Theorem 1.1. \text {Palindromes}\not \in \text {TM-Time}(t(n)) for any t(n)=o(n^{2}).

More precisely, for every n and s, an s-state TM that decides if an n-bit input is a palindrome requires time \ge cn^{2}/\log s.

The main concept that allows us to formalize the information bottleneck mentioned above is the following.

Definition 1.1. A crossing sequence of a TM M on input x and boundary i, abbreviated i-CS, is the sequence of states that M is transitioning to when crossing cell boundary i (i.e., going from Cell i to i+1 or vice versa) during the computation on x.

The idea in the proof is very interesting. If M accepts inputs x and y and those two inputs have the same i-CS for some i, then we can “stitch together” the computation of M on x and y at boundary i to create a new input z that is still accepted by M. The input z is formed by picking bits from x to the left of cell boundary i and bits from y to the right of i:

\begin{aligned} z:=x_{1}x_{2}\cdots x_{i}y_{i+1}y_{i+2}\cdots y_{n}. \end{aligned}

The proof that z is still accepted is left as an exercise.

Now, for many problems, input z should not be accepted by M, and this gives a contradiction. In particular this will be the case for palindromes. We are going to find two palindromes x and y that have the same i-CS for some i, but the corresponding z is not a palindrome, yet it is still accepted by M. We can find these two palindromes if M takes too little time. The basic idea is that if M runs in time t, then, because i-CSs for different i correspond to different steps of the computation, for every input there is a value of i such that the i-CS is short, namely of length at most t(|x|)/n. If t(n) is much less than n^{2}, the length of this CS is much less than n, from which we can conclude that the number of CSs is much less than the number of inputs, and so we can find two inputs with the same CS.

Proof. Let n be divisible by four, without loss of generality, and consider palindromes of the form

\begin{aligned} p(x):=x0^{n/2}x^{R} \end{aligned}

where x\in \{0,1\} ^{n/4} and x^{R} is the reverse of x.

Assume there are x\ne y in \{0,1\} ^{n/4} and i in the middle part, i.e., n/4\le i\le 3n/4-1, so that the i-CS of M on p(x) and p(y) is the same. Then we can define z:=x0^{n/2}y^{R} which is not a palindrome but is still accepted by M, concluding the proof.

It remains to prove that the assumption of Theorem 1.1 implies the assumption in the previous paragraph. Suppose M runs in time t. Since crossing sequences at different boundaries correspond to different steps of the computation, for every x\in \{0,1\} ^{n/4} there is a value of i in the middle part such that the i-CS of M on p(x) has length \le ct/n. This implies that there is an i in the middle s.t. there are \ge c2^{n/4}/n inputs x for which the i-CS of M on p(x) has length \le ct/n.

For fixed i, the number of i-CS of length \le \ell is \le (s+1)^{\ell }.

Hence there are x\ne y for which p(x) and p(y) have the same i-CS whenever c2^{n/4}/n\ge (s+1)^{ct/n}. Taking logs one gets ct\log (s)/n\le cn. QED

Exercise 1.1. For every s and n describe an s-state TM deciding palindromes in time cn^{2}/\log s (matching Theorem 1.1).

Exercise 1.2. Let L:=\{xx:x\in \{0,1\} ^{*}\}. Show L\in \text {TM-Time}(cn^{2}), and prove this is tight up to constants.

One may be tempted to think that it is not hard to prove stronger bounds for similar functions. In fact as mentioned above this has resisted all attempts!

1.2 Counting: impossibility results for non-explicit functions

Proving the existence of hard functions is simple: Just count. If there are more functions than efficient machines, some function is not efficiently computable. This is applicable to any model; next we state it for TMs for concreteness. Later we will state it for circuits.

Theorem 1.2. There exists a function f:\{0,1\} ^{n}\to \{0,1\} that cannot be computed by a TM with s states unless cs\log s\ge 2^{n}, regardless of time.

Proof. The number of TMs with s states is \le s^{cs}, and each TM computes at most one function (it may compute none, if it does not stop). The number of functions on n bits is 2^{2^{n}}. Hence if 2^{n}>cs\log s some function cannot be computed. QED

Note this bound is not far from that in Exercise ??.

It is instructive to present this basic result as an application of the probabilistic method:

Proof. Let us pick f uniformly at random. We want to show that

\begin{aligned} \mathbb {P}_{f}[\exists \text { an }s\text {-state TM }M\text { such that }M(x)=f(x)\text { for every }x\in \{0,1\} ^{n}]<1. \end{aligned}

Indeed, if the probability is less than 1 then some function exists that cannot be computed. By a union bound we can say that this probability is

\begin{aligned} \le \sum _{M}\mathbb {P}_{f}[M(x)=f(x)\text { for every }x\in \{0,1\} ^{n}], \end{aligned}

where the sum is over all s-state machines. Each probability in the sum is (1/2)^{2^{n}}, since M is fixed. The number of s-state machines is \le s^{cs}. So the sum is \le s^{cs}(1/2)^{2^{n}}, and we can conclude as before taking logs. QED

1.3 Diagonalization: Enumerating machines

Can you compute more if you have more time? For example, can you write a program that runs in time n^{2} and computes something that cannot be computed in time n^{1.5}? The answer is yes for trivial reasons if we allow for non-boolean functions.

Exercise 1.3. Give a function f:\{0,1\}^* \to \{0,1\}^* in \text {Time}(n^{2})\setminus \text {Time}(n^{1.5}).

The answer is more interesting if the functions are boolean. Such results are known as time hierarchies, and a generic technique for proving them is diagonalization, applicable to any model.

We first illustrate the result in the simpler case of partial functions, which contains the main ideas. Later we discuss total functions.

Theorem 1.3. There is a partial function in \text {TM-Time}(t(n)) such that any TM M computing it runs in time \ge c_{M}t(n), for any t(n)=\omega (1).

In other words, \text {Time}(t(n))\supsetneq \text {Time}(o(t(n))).

Proof. Consider the TM H that on input x=(M,1^{n-|M|}) of length n runs M on x until it stops and then complements the answer. (We can use a simple encoding of these pairs, for example every even-position bit of the description of M is a 0.)

Now define X to be the subset of pairs s.t. M runs in time \le t(n)/|M|^{c} on inputs of length n, and |M|^{c}\le t(n)/2.

On these inputs, H runs in time |M|^{c}+|M|^{c}\cdot t(n)/|M|^{c}\le t(n), as desired. To accomplish this, H can begin by making a copy of M in time |M|^{c}\le t(n)/2. Then every step of the computation of M can be simulated by H with |M|^{c} steps, always keeping the description of M to the left of the head.

Now suppose N computes the same function as H in time t(n)/|N|^{c}. Note that x:=(N,1^{n-|N|}) falls in the domain X of the function, for n sufficiently large, using that t(n)=\omega (1). Now consider running N on x. We have N(x)=H(x) by supposition, but H(x) is the complement of N(x), contradiction. QED

This proof is somewhat unsatisfactory; in particular we have no control on the running time of H on inputs not in X. It is desirable to have a version of this fundamental result for total functions. Such a version is stated next. It requires additional assumptions on t and a larger gap between the running times. Perhaps surprisingly, as we shall discuss, both requirements are essential.

Theorem 1.4. Let t(n)\ge n be a function. Suppose that f(x):=t(|x|) is in \text {TM-Time}(t(n)/\log ^{c}n).

There is a total function in \text {TM-Time}(ct(n)\log t(n)) that cannot be computed by any TM M in time c_{M}t(n).

The assumption about t is satisfied by all standard functions, including all those in this book. (For example, take t(n):=n^{2}. The corresponding f is then |x|^{2}. To compute f on input x of n bits we can first compute |x|=n in time cn\log n (Exercise ??). This is a number of b:=\log n bits. We can then square this number in time b^{c}. Note that the time to compute f(x) is dominated by the cn\log n term coming from computing |x|, which does not depend on t and is much less than the n^{2}/\log ^{c}n in the assumption.) The assumption cannot be removed altogether because there exist pathological functions t for which the result is false.

The proof is similar to that of Theorem 1.3. However, to make the function total we need to deal with arbitrary machines, which may not run in the desired time or even stop at all. The solution is to clock H in a manner similar to the proof of the universal machine, Lemma ??.

Also, we define a slightly different language to give a stronger result – a unary language – and to avoid some minor technical details (the possibility that the computation of f erases the input).

We define a TM H that on input 1^{n} obtains a description of a TM M, computes t(n), and then simulates M on input 1^{n} for t(n) steps in a way similar to Lemma ??, and if M stops then H outputs the complement of the output of M; and if M does not stop then H stops and outputs anything. Now H computes a function in time about t(n). We argue that this function cannot be computed in much less time as follows. Suppose some TM M computes the function in time somewhat less than t(n). Then we can pick an 1^{n} for which H obtains the description of this M, and the simulation always stops. Hence, for that 1^{n} we would obtain M(1^{n})=H(1^{n})=1-M(1^{n}), which is a contradiction.

However, there are interesting differences with the simulation in Lemma ??. In that lemma the universal machine U was clocking the steps of the machine M being simulated. Now instead we need to clock the steps of U itself, even while U is parsing the description of M to compute its transition function. This is necessary to guarantee that H does not waste time on big TM descriptions.

Whereas in Lemma ?? the tape was arranged as

\begin{aligned} (x,M,\underline {i},t',y), \end{aligned}

it will now be arranged as

\begin{aligned} (x,M',\underline {i},t',M'',y) \end{aligned}

which is parsed as follows. The description of M is M'M'', M is in state \underline {i}, the tape of M contains xy and the head is on the left-most symbol of y. The integer t' is the counter, decreased at every step.

Proof. Define TM H that on input 1^{n}:

  1. Compute (n,t(n),1^{n}).
  2. Compute (M_{n},t(n),1^{n}). Here M_{n} is obtained from n by removing all left-most zeroes until the first 1. I.e., if n=0^{j}1x then M_{n}=x. This is similar to the fact that a program does not change if you add, say, empty lines at the bottom.
  3. Simulate M_{n} on 1^{n}, reducing the counter t(n) at every step, including those parsing M_{n}, as explained before.
  4. If M_{n} stops before the counter reaches 0, output the complement of its output. If the counter reaches 0 stop and output anything.

Running time of H.

  1. Computing n is similar to Exercise ??. By assumption t(n) is computable in time t(n)/\log ^{c}n. Our definition of computation allows for erasing the input, but we can keep n around spending at most another \log ^{c}n factor. Thus we can compute (n,t(n)) in time t(n). Finally, in case it was erased, we can re-compute 1^{n} in time cn\log n by keeping a counter (initialized to a copy of n).
  2. This takes time c(n+t(n)): simply scan the input and remove zeroes.
  3. Decreasing the counter takes c|t(n)| steps. Hence this simulation will take ct(n)\log t(n) time.

Overall the running time is ct(n)\log t(n).

Proof that the function computed by H requires much time. Suppose some TM M computes the same function as H. Consider inputs 1^{n} where n=0^{j}1M. Parsing the description of M to compute its transition function takes time c_{M}, a value that depends on M only and not on j. Hence H will simulate \lfloor t(n)/c_{M}\rfloor steps of M. If M stops within that time (which requires t(n) to be larger than b_{M}, and so n and j sufficiently large compared to M) then the simulation terminates and we reach a contradiction as explained before. QED

The extra \log t(n) factor cannot be reduced because of the surprising result presented in Theorem 1.5 showing that, on TMs, time o(n\log n) equals time n for computing total functions.

However, tighter time hierarchies hold for more powerful models, like RAMs. Also, a time hierarchy for total functions for \text {BPTime} is… an open problem! But a hierarchy is known for partial functions.

Exercise 1.4. (1) State and prove a tighter time hierarchy for Time (which recall corresponds to RAMs) for total functions. You don’t need to address simulation details, but you need to explain why a sharper separation is possible.

(2) Explain the difficulty in extending (1) to \text {BPTime}.

(3) State and prove a time hierarchy for \text {BPTime} for partial functions.

1.3.1 \text {TM-Time}(o(n\log n))=\text {TM-Time}(n)

In this subsection we prove the result in the title, which we also mentioned earlier. First let us state the result formally.

Theorem 1.5. [12] Let f:\{0,1\}^* \to \{0,1\} be in \text {TM-Time}(t(n)) for a t(n)=o(n\log n). Then f\in \text {TM-Time}(n).

Note that time n is barely enough to scan the input. Indeed, the corresponding machines in Theorem 1.5 will only move the head in one direction.

The rest of this section is devoted to proving the above theorem. Let M be a machine for f witnessing the assumption of the theorem. We can assume that M stops on every input (even though our definition of time only applies to large enough inputs), possibly by adding \le n to the time, which does not change the assumption on t(n). The theorem now follows from the combination of the next two lemmas.

Lemma 1.1. Let M be a TM running in time t(n)\le o(n\log n). Then on every input x\in \{0,1\}^* every i-CS with i\le |x| has length \le c_{M}.

Proof. For an integer b let x(b) be a shortest input such that there exists j\in \{0,1,\ldots ,n\} for which the j-CS in the computation of M on x(b) has length \ge b. Let n(b):=|x(b)|.

Exercise 1.5. Prove n(b)\to \infty for b\to \infty .

There are n(b)+1\ge n(b) tape boundaries within or bordering x(b). If we pick a boundary uniformly at random, the average length of a CS on x(b) is \le t(n(b))/n(b). Hence there are \ge n(b)/2 choices for i s.t. the i-CS on x(b) has length \le 2t(n(b))/n(b).

The number of such crossing sequences is

\begin{aligned} \le (s+1)^{2t(n(b))/n(b)}=(s+1)^{o(n(b)\log n(b))/n(b)}=n(b)^{o(\log s)}. \end{aligned}

Hence, the same crossing sequence occurs at \ge (n(b)/2)/n(b)^{o(\log s)}\ge 4 positions i, using that n(b) is large enough.

Of these four, one could be the CS of length \ge b from the definition of x(b). Of the other three, two are on the same side of j. We can remove the corresponding interval of the input without removing the CS of length \ge b. Hence we obtained a shorter input with a CS of length \ge b. Contradiction. QED

Lemma 1.2. Suppose f:\{0,1\}^* \to \{0,1\} is computable by a TM such that on every input x, every i-CS with i\le |x| has length \le b. Then f is computable in time n by a TM with c_{b} states that only moves the head in one direction.

Exercise 1.6. Prove this.

1.4 Circuits

The situation for circuits is similar to that of 2-TMs, and by Theorem ?? we know that proving \omega (n\log n) bounds on circuits is harder than for 2-TMs. Even a bound of cn is unknown. The following is a circuit version of Theorem 1.2. Again, the bound is close to what we saw in Theorem ??.

Theorem 1.6. [3] There are functions f:\{0,1\} ^{n}\to \{0,1\} that require circuits of size \ge (1-o(1))2^{n}/n, for every n.

One can prove a hierarchy for circuit size, by combining Theorem 1.6 and Theorem ??. As stated, the results give that circuits of size cs compute more functions than those of size s. In fact, the “o(1)” in the theorems is small, so one can prove a sharper result. But a stronger and more enjoyable argument exists.

Theorem 1.7. [Hierarchy for circuit size] For every n and s\le c2^{n}/n there is a function f:\{0,1\} ^{n}\to \{0,1\} that can be computed by circuits of size s+cn but not by circuits of size s.

Proof. Consider a path from the all-zero function to a function which requires circuits of size \ge s, guaranteed to exist by Theorem 1.6, changing the output of the function on one input at each step of the path. Let h be the first function that requires size >s, and let h' be the function right before that in the path. Note that h' has circuits of size \le s, and h can be computed from h' by changing the value on a single input. The latter can be implemented by circuits of size cn. QED

Exercise 1.7. Prove a stronger hierarchy result for alternating circuits, where the “cn” in Theorem 1.7 is replaced with “c.”

In fact, this improvement is possible even for (non-alternating) circuits, see Problem 1.2.

1.4.1 The circuit won’t fit in the universe: Non-asymptotic, cosmological results

Most of the results in this book are asymptotic, i.e., we do not explicitly work out the constants because they become irrelevant for larger and larger input lengths. As stated, these results don’t say anything for inputs of a fixed length. For example, any function on 10^{100} bits is in Time(c).

However, it is important to note that all the proofs are constructive and one can work out the constants, and produce non-asymptotic results. We next state one representative example where this was done. It is about a problem in logic, an area which often produces very hard problems.

On an alphabet of size 63, the language used to write formulas includes first-order variables that range over \mathbb {N}, second-order variables that range over finite subsets of \mathbb {N}, the predicates “y=x+1” and “x\in S” where x and y denote first-order variables and S denotes a set variable, and standard quantifiers, connectives, constants, binary relation symbols on integers, and set equality. For example one can write things like “every finite set has a maximum:” \forall S\exists x\in S\forall y\in S,y\le x.

Theorem 1.8. [4] To decide the truth of logical formulas of length at most 610 in this language requires a circuit containing at least 10^{125} gates. So even if each gate were the size of a proton, the circuit would not fit in the known universe.

Their result applies even to randomized circuits with error 1/3, if 610 is replaced with 614. (We can define randomized circuits analogously to BPTime.)

1.5 Problems

Problem 1.1. Hierarchy Theorem 1.4 only gives a function f that cannot be computed fast on all large enough input lengths: it is consistent with the theorem that f can be computed fast on infinitely many input lengths.

Give a function f:\{0,1\}^* \to \{0,1\}^* mapping x to \{0,1\} ^{\log \log \log |x|} that is computable in time n^{c} but such that for any TM M running in time n^{2} the following holds: for every n\ge c_{M} and every x\in \{0,1\} ^{n} we have M(x)\ne f(x).

Hint: Note the range of f. Can this result hold as stated with range \{0,1\} ?

Problem 1.2. Replace “cn” in Theorem 1.7 with “c.”

Problem 1.3. Prove that \{0^{i}1^{i}:i\ge 0\}\in \text {TM-Time}(cn\log n)\setminus \text {TM-Time}(n).

This shows Theorem 1.5 is tight, and gives an explicit problem witnessing the time-hierarchy separation in Theorem 1.4, for a specific time bound. For the negative result, don’t use pumping lemmas or other characterization results not covered in this book.

Problem 1.4. The following argument contradicts Theorem 1.4; what is wrong with it?

“By Theorem 1.5, \text {TM-Time}(n\log ^{0.9}n)=\text {TM-Time}(n). By padding (Theorem 1.5), \text {TM-Time}(n\log ^{1.1}n)=\text {TM-Time}(n\log ^{0.9}n). Hence \text {TM-Time}(n\log ^{1.1}n)=\text {TM-Time}(n).”

References

[1]   F. C. Hennie. One-tape, off-line Turing machine computations. Information and Control, 8(6):553–578, 1965.

[2]   Kojiro Kobayashi. On the structure of one-tape nondeterministic Turing machine time hierarchy. Theor. Comput. Sci., 40:175–193, 1985.

[3]   Claude E. Shannon. The synthesis of two-terminal switching circuits. Bell System Tech. J., 28:59–98, 1949.

[4]   Larry Stockmeyer and Albert R. Meyer. Cosmological lower bound on the circuit complexity of a small problem in logic. J. ACM, 49(6):753–784, 2002.

By Manu

Professor in Theoretical Computer Science at The University of Melbourne (apply by February 12, 2023)

from CCI: jobs

The University of Melbourne is seeking an outstanding academic with expertise in Theoretical Computer Science to join an internationally recognised group of academics within the School of Computing and Information Systems in the Faculty of Engineering and Information Technology. We are seeking to appoint at the Full Professor level. The University is an equal opportunity […]

The University of Melbourne is seeking an outstanding academic with expertise in Theoretical Computer Science to join an internationally recognised group of academics within the School of Computing and Information Systems in the Faculty of Engineering and Information Technology. We are seeking to appoint at the Full Professor level. The University is an equal opportunity employer.

Website: https://jobs.unimelb.edu.au/en/job/911413/professor-in-theoretical-computing-science
Email: awirth@unimelb.edu.au

By shacharlovett

Hardness of braided quantum circuit optimization in the surface code

from arXiv: Computational Complexity

Authors: Kunihiro Wasa, Shin Nishio, Koki Suetsugu, Michael Hanks, Ashley Stephens, Yu Yokoi, Kae Nemoto

Large-scale quantum information processing requires the use of quantum error correcting codes to mitigate the effects of noise in quantum devices. Topological error-correcting codes, such as surface codes, are promising candidates as they can be implemented using only local interactions in a two-dimensional array of physical qubits. Procedures such as defect braiding and lattice surgery can then be used to realize a fault-tolerant universal set of gates on the logical space of such topological codes. However, error correction also introduces a significant overhead in computation time, the number of physical qubits, and the number of physical gates. While optimizing fault-tolerant circuits to minimize this overhead is critical, the computational complexity of such optimization problems remains unknown. This ambiguity leaves room for doubt surrounding the most effective methods for compiling fault-tolerant circuits for a large-scale quantum computer. In this paper, we show that the optimization of a special subset of braided quantum circuits is NP-hard by a polynomial-time reduction of the optimization problem into a specific problem called Planar Rectilinear 3SAT.


Parameterized Complexity of Weighted Team Definability

from arXiv: Computational Complexity

Authors: Juha Kontinen, Yasir Mahmood, Arne Meier, Heribert Vollmer

In this article, we study the complexity of weighted team definability for logics with team semantics. This problem is a natural analogue of one of the most studied problems in parameterized complexity, the notion of weighted Fagin-definability, which is formulated in terms of satisfaction of first-order formulas with free relation variables. We focus on the parameterized complexity of weighted team definability for a fixed formula phi of central team-based logics. Given a first-order structure A and the parameter value k as input, the question is to determine whether A,T models phi for some team T of size k. We show several results on the complexity of this problem for dependence, independence, and inclusion logic formulas. Moreover, we also relate the complexity of weighted team definability to the complexity classes in the well-known W-hierarchy as well as paraNP.


An automated, geometry-based method for the analysis of hippocampal thickness

from arXiv: Computational Geometry

Authors: Kersten Diers, Hannah Baumeister, Frank Jessen, Emrah Düzel, David Berron, Martin Reuter

The hippocampus is one of the most studied neuroanatomical structures due to its involvement in attention, learning, and memory as well as its atrophy in ageing, neurological, and psychiatric diseases. Hippocampal shape changes, however, are complex and cannot be fully characterized by a single summary metric such as hippocampal volume as determined from MR images. In this work, we propose an automated, geometry-based approach for the unfolding, point-wise correspondence, and local analysis of hippocampal shape features such as thickness and curvature. Starting from an automated segmentation of hippocampal subfields, we create a 3D tetrahedral mesh model as well as a 3D intrinsic coordinate system of the hippocampal body. From this coordinate system, we derive local curvature and thickness estimates as well as a 2D sheet for hippocampal unfolding. We evaluate the performance of our algorithm with a series of experiments to quantify neurodegenerative changes in Mild Cognitive Impairment and Alzheimer's disease dementia. We find that hippocampal thickness estimates detect known differences between clinical groups and can determine the location of these effects on the hippocampal sheet. Further, thickness estimates improve classification of clinical groups and cognitively unimpaired controls when added as an additional predictor. Comparable results are obtained with different datasets and segmentation algorithms. Taken together, we replicate canonical findings on hippocampal volume/shape changes in dementia, extend them by gaining insight into their spatial localization on the hippocampal sheet, and provide additional, complementary information beyond traditional measures. We provide a new set of sensitive processing and analysis tools for the analysis of hippocampal geometry that allows comparisons across studies without relying on image registration or requiring manual intervention.


Faster maximal clique enumeration in large real-world link streams

from arXiv: Data Structures and Algorithms

Authors: Alexis Baudin, Clémence Magnien, Lionel Tabourier

Link streams offer a good model for representing interactions over time. They consist of links $(b,e,u,v)$, where $u$ and $v$ are vertices interacting during the whole time interval $[b,e]$. In this paper, we deal with the problem of enumerating maximal cliques in link streams. A clique is a pair $(C,[t_0,t_1])$, where $C$ is a set of vertices that all interact pairwise during the full interval $[t_0,t_1]$. It is maximal when neither its set of vertices nor its time interval can be increased. Some of the main works solving this problem are based on the famous Bron-Kerbosch algorithm for enumerating maximal cliques in graphs. We take this idea as a starting point to propose a new algorithm which matches the cliques of the instantaneous graphs formed by links existing at a given time $t$ to the maximal cliques of the link stream. We prove its validity and compute its complexity, which is better than the state-of-the art ones in many cases of interest. We also study the output-sensitive complexity, which is close to the output size, thereby showing that our algorithm is efficient. To confirm this, we perform experiments on link streams used in the state of the art, and on massive link streams, up to 100 million links. In all cases our algorithm is faster, mostly by a factor of at least 10 and up to a factor of $10^4$. Moreover, it scales to massive link streams for which the existing algorithms are not able to provide the solution.


On the Within-Group Discrimination of Screening Classifiers

from arXiv: Data Structures and Algorithms

Authors: Nastaran Okati, Stratis Tsirtsis, Manuel Gomez Rodriguez

Screening classifiers are increasingly used to identify qualified candidates in a variety of selection processes. In this context, it has been recently shown that, if a classifier is calibrated, one can identify the smallest set of candidates which contains, in expectation, a desired number of qualified candidates using a threshold decision rule. This lends support to focusing on calibration as the only requirement for screening classifiers. In this paper, we argue that screening policies that use calibrated classifiers may suffer from an understudied type of within-group discrimination -- they may discriminate against qualified members within demographic groups of interest. Further, we argue that this type of discrimination can be avoided if classifiers satisfy within-group monotonicity, a natural monotonicity property within each of the groups. Then, we introduce an efficient post-processing algorithm based on dynamic programming to minimally modify a given calibrated classifier so that its probability estimates satisfy within-group monotonicity. We validate our algorithm using US Census survey data and show that within-group monotonicity can be often achieved at a small cost in terms of prediction granularity and shortlist size.

Differentially-Private Hierarchical Clustering with Provable Approximation Guarantees

from arXiv: Data Structures and Algorithms

Authors: Jacob Imola, Alessandro Epasto, Mohammad Mahdian, Vincent Cohen-Addad, Vahab Mirrokni

Hierarchical Clustering is a popular unsupervised machine learning method with decades of history and numerous applications. We initiate the study of differentially private approximation algorithms for hierarchical clustering under the rigorous framework introduced by (Dasgupta, 2016). We show strong lower bounds for the problem: that any $\epsilon$-DP algorithm must exhibit $O(|V|^2/ \epsilon)$-additive error for an input dataset $V$. Then, we exhibit a polynomial-time approximation algorithm with $O(|V|^{2.5}/ \epsilon)$-additive error, and an exponential-time algorithm that meets the lower bound. To overcome the lower bound, we focus on the stochastic block model, a popular model of graphs, and, with a separation assumption on the blocks, propose a private $1+o(1)$ approximation algorithm which also recovers the blocks exactly. Finally, we perform an empirical study of our algorithms and validate their performance.
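
For context, Dasgupta's objective assigns to a hierarchy $T$ over a weighted graph the cost $\sum_{(u,v)\in E} w(u,v)\cdot|\mathrm{leaves}(T_{u\vee v})|$, where $T_{u\vee v}$ is the subtree rooted at the lowest common ancestor of $u$ and $v$. The sketch below (ours; it is not private and is not the paper's algorithm) evaluates this cost for a hierarchy given as nested tuples.

```python
def leaves(tree):
    """Return the set of leaf labels of a nested-tuple hierarchy."""
    if not isinstance(tree, tuple):
        return {tree}
    return set().union(*(leaves(child) for child in tree))

def dasgupta_cost(tree, edges):
    """Dasgupta (2016) cost of the hierarchy `tree` for weighted `edges`,
    given as a dict {(u, v): w}: each edge pays w * |leaves(lca(u, v))|."""
    if not isinstance(tree, tuple):
        return 0
    child_leaves = [leaves(child) for child in tree]
    here = set().union(*child_leaves)
    cost = 0
    for (u, v), w in edges.items():
        # u and v have their LCA at this node iff both are below it
        # but they lie in different children.
        if u in here and v in here and \
           not any(u in s and v in s for s in child_leaves):
            cost += w * len(here)
    return cost + sum(dasgupta_cost(child, edges) for child in tree)

# Path 1-2-3 clustered as ((1, 2), 3): edge (1,2) pays 2, edge (2,3) pays 3.
edges = {(1, 2): 1.0, (2, 3): 1.0}
print(dasgupta_cost(((1, 2), 3), edges))  # 5.0
```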

Exploring Wedges of an Oriented Grid by an Automaton with Pebbles

from arXiv: Data Structures and Algorithms

Authors: Subhash Bhagat, Andrzej Pelc

A mobile agent, modeled as a deterministic finite automaton, navigates in the infinite anonymous oriented grid $\mathbb{Z} \times \mathbb{Z}$. It has to explore a given infinite subgraph of the grid by visiting all of its nodes. We focus on the simplest subgraphs, called {\em wedges}, spanned by all nodes of the grid located between two half-lines in the plane, with a common origin. Many wedges turn out to be impossible to explore by an automaton that cannot mark nodes of the grid. Hence, we study the following question: Given a wedge $W$, what is the smallest number $p$ of (movable) pebbles for which there exists an automaton that can explore $W$ using $p$ pebbles? Our main contribution is a complete solution of this problem. For each wedge $W$ we determine this minimum number $p$, show an automaton that explores it using $p$ pebbles and show that fewer pebbles are not enough. We show that this smallest number of pebbles can vary from 0 to 3, depending on the angle between half-lines limiting the wedge and depending on whether the automaton can cross these half-lines or not.

Adding an Edge in a $P_4$-sparse Graph

from arXiv: Data Structures and Algorithms

Authors: Anna Mpanti, Stavros D. Nikolopoulos, Leonidas Palios

The minimum completion (fill-in) problem is defined as follows: Given a graph family $\mathcal{F}$ (more generally, a property $\Pi$) and a graph $G$, the completion problem asks for the minimum number of non-edges that need to be added to $G$ so that the resulting graph belongs to the graph family $\mathcal{F}$ (or has property $\Pi$). This problem is NP-complete for many subclasses of perfect graphs, and polynomial solutions are available only for minimal completion sets. We study the minimum completion problem of a $P_4$-sparse graph $G$ with an added edge. We prove that there is an optimal solution whose form is one of a small number of possibilities. This, together with the solution of the problem when the added edge connects two non-adjacent vertices of a spider or two vertices in different connected components of the graph, enables us to present a polynomial-time algorithm for the problem.

Sublinear Approximation Schemes for Scheduling Precedence Graphs of Bounded Depth

from arXiv: Data Structures and Algorithms

Authors: Bin Fu, Yumei Huo, Hairong Zhao

We study the classical scheduling problem on parallel machines with precedence constraints, where the precedence graph has bounded depth $h$. Our goal is to minimize the maximum completion time. We focus on developing approximation algorithms that use only sublinear space or sublinear time. We develop the first one-pass streaming approximation schemes using sublinear space when all jobs' processing times differ by no more than a constant factor $c$ and the number of machines $m$ is at most $\tfrac {2n \epsilon}{3 h c }$. This is so far the best approximation we can have in terms of $m$, since no polynomial-time approximation better than $\tfrac{4}{3}$ exists when $m = \tfrac{n}{3}$ unless P=NP, even if all jobs have equal processing time. The algorithms are then extended to the more general problem where the largest $\alpha n$ jobs, for some constant $0 < \alpha \le 1$, differ by no more than a factor of $c$. We also develop the first sublinear-time algorithms for both problems. For the more general problem, when $ m \le \tfrac { \alpha n \epsilon}{20 c^2 \cdot h } $, our algorithm is a randomized $(1+\epsilon)$-approximation scheme that runs in sublinear time. This work not only provides an algorithmic solution to the studied problem in big data and cloud computing environments, but also gives a methodological framework for designing sublinear approximation algorithms for other scheduling problems.
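
As a point of reference only (not the authors' streaming scheme), bounded depth already suggests a simple offline baseline: group jobs by their depth in the precedence DAG and schedule each level greedily as a separate phase, so precedence constraints are respected automatically. The function names below are ours.

```python
from heapq import heappush, heappop, heapify

def depth_levels(jobs, preds):
    """Group job ids into levels by depth in the precedence DAG.
    `preds[j]` lists the predecessors of job j (missing keys mean none)."""
    depth = {}
    def d(j):
        if j not in depth:
            depth[j] = 1 + max((d(p) for p in preds.get(j, ())), default=-1)
        return depth[j]
    levels = {}
    for j in jobs:
        levels.setdefault(d(j), []).append(j)
    return [levels[k] for k in sorted(levels)]

def level_by_level_makespan(proc_time, preds, m):
    """Schedule each depth level with greedy list scheduling on m machines;
    levels run as separate phases, so all precedences are respected."""
    makespan = 0.0
    for level in depth_levels(list(proc_time), preds):
        machines = [0.0] * m              # machine loads within this phase
        heapify(machines)
        for j in sorted(level, key=lambda j: -proc_time[j]):  # LPT order
            load = heappop(machines)      # least-loaded machine
            heappush(machines, load + proc_time[j])
        makespan += max(machines)
    return makespan

# Example: chain 1 -> 3 plus an independent job 2, on m = 2 machines.
proc = {1: 2.0, 2: 2.0, 3: 1.0}
preds = {3: [1]}
print(level_by_level_makespan(proc, preds, 2))  # 2.0 (level {1,2}) + 1.0 = 3.0
```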

Durable Algorithms for Writable LL/SC and CAS with Dynamic Joining

from arXiv: Data Structures and Algorithms

Authors: Prasad Jayanti, Siddhartha Jayanti, Sucharita Jayanti

We present durable implementations for two well-known universal primitives -- CAS (compare-and-swap), and its ABA-free counterpart LLSC (load-linked, store-conditional). All our implementations are: writable, meaning they support a Write() operation; have constant time complexity per operation; allow for dynamic joining, meaning newly created processes (a.k.a. threads) of arbitrary names can join a protocol and access our implementations; and have adaptive space complexities, meaning the space usage scales with the number of processes $n$ that actually use the objects, as opposed to previous protocols which are designed for a maximum number of processes $N$. Our durable Writable-CAS implementation, DuraCAS, requires $O(m + n)$ space to support $m$ objects that get accessed by $n$ processes, improving on the state-of-the-art $O(m + N^2)$. By definition, LLSC objects must store "contexts" in addition to object values. Our Writable-LLSC implementation, DuraLL, requires $O(m + n + C)$ space, where $C$ is the number of "contexts" stored across all the objects. While LLSC has an advantage over CAS due to being ABA-free, the object definition seems to require additional space usage. To address this trade-off, we define an External Context (EC) variant of LLSC. Our EC Writable-LLSC implementation is ABA-free and has a space complexity of just $O(m + n)$.

To our knowledge, we are the first to present durable CAS algorithms that allow for dynamic joining, and our algorithms are the first to exhibit adaptive space complexities. To our knowledge, we are the first to implement any type of durable LLSC objects.
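
For readers less familiar with the primitives: CAS atomically installs a new value only if the object still holds an expected old value. The toy Python class below merely emulates these semantics with a lock (our illustration; it is neither durable nor lock-free, and is unrelated to DuraCAS or DuraLL).

```python
import threading

class CASCell:
    """Toy cell exposing compare-and-swap semantics (emulated with a lock)."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def read(self):
        with self._lock:
            return self._value

    def cas(self, expected, new):
        """Atomically set the value to `new` iff it equals `expected`.
        Returns True on success, False otherwise."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

# A retry-loop increment written against the CAS interface.
cell = CASCell(0)
def increment(cell):
    while True:
        old = cell.read()
        if cell.cas(old, old + 1):
            return

threads = [threading.Thread(target=increment, args=(cell,)) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(cell.read())  # 8
```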

Approximating Red-Blue Set Cover

from arXiv: Data Structures and Algorithms

Authors: Eden Chlamtáč, Yury Makarychev, Ali Vakilian

We provide a new approximation algorithm for the Red-Blue Set Cover problem and give a new hardness result. Our approximation algorithm achieves an $\tilde O(m^{1/3})$-approximation, improving on the $\tilde O(m^{1/2})$-approximation due to Elkin and Peleg (where $m$ is the number of sets). Additionally, we provide a nearly approximation-preserving reduction from Min $k$-Union to Red-Blue Set Cover that gives an $\tilde\Omega(m^{1/4 - \varepsilon})$ hardness under the Dense-vs-Random conjecture.
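
To fix the problem: each set contains red and blue elements, every blue element must be covered, and the cost is the number of distinct red elements covered. Below is a simple greedy heuristic for intuition (ours; it does not achieve the $\tilde O(m^{1/3})$ guarantee), which repeatedly picks the set with the best ratio of newly covered reds to newly covered blues.

```python
def greedy_red_blue(sets_red, sets_blue, blues):
    """Greedy heuristic for Red-Blue Set Cover: repeatedly pick the set
    minimizing (newly covered reds) / (newly covered blues).
    Assumes the blue elements `blues` can be covered by the given sets."""
    covered_blue, covered_red, chosen = set(), set(), []
    while covered_blue != blues:
        best, best_ratio = None, None
        for i, (reds, blus) in enumerate(zip(sets_red, sets_blue)):
            new_blue = blus - covered_blue
            if not new_blue:
                continue
            ratio = len(reds - covered_red) / len(new_blue)
            if best_ratio is None or ratio < best_ratio:
                best, best_ratio = i, ratio
        chosen.append(best)
        covered_blue |= sets_blue[best]
        covered_red |= sets_red[best]
    return chosen, covered_red

# Three sets over blue elements {1, 2, 3}; reds are labelled 'r1', 'r2', ...
sets_blue = [{1, 2}, {3}, {1, 2, 3}]
sets_red = [{'r1'}, {'r1'}, {'r2', 'r3'}]
print(greedy_red_blue(sets_red, sets_blue, {1, 2, 3}))  # ([0, 1], {'r1'})
```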

A Nearly-Optimal Bound for Fast Regression with $\ell_\infty$ Guarantee

from arXiv: Data Structures and Algorithms

Authors: Zhao Song, Mingquan Ye, Junze Yin, Lichen Zhang

Given a matrix $A\in \mathbb{R}^{n\times d}$ and a vector $b\in \mathbb{R}^n$, we consider the regression problem with $\ell_\infty$ guarantees: finding a vector $x'\in \mathbb{R}^d$ such that $ \|x'-x^*\|_\infty \leq \frac{\epsilon}{\sqrt{d}}\cdot \|Ax^*-b\|_2\cdot \|A^\dagger\|$ where $x^*=\arg\min_{x\in \mathbb{R}^d}\|Ax-b\|_2$. One popular approach for solving such an $\ell_2$ regression problem is via sketching: pick a structured random matrix $S\in \mathbb{R}^{m\times n}$ with $m\ll n$ such that $SA$ can be computed quickly, and solve the ``sketched'' regression problem $\arg\min_{x\in \mathbb{R}^d} \|SAx-Sb\|_2$. In this paper, we show that in order to obtain such an $\ell_\infty$ guarantee for $\ell_2$ regression, one has to use sketching matrices that are dense. To the best of our knowledge, this is the first use case in which dense sketching matrices are necessary. On the algorithmic side, we prove that there exists a distribution of dense sketching matrices with $m=\epsilon^{-2}d\log^3(n/\delta)$ such that solving the sketched regression problem gives the $\ell_\infty$ guarantee, with probability at least $1-\delta$. Moreover, the matrix $SA$ can be computed in time $O(nd\log n)$. Our row count is nearly optimal up to logarithmic factors, and significantly improves the result in [Price, Song and Woodruff, ICALP'17], in which a number of rows super-linear in $d$, namely $m=\Omega(\epsilon^{-2}d^{1+\gamma})$ for $\gamma=\Theta(\sqrt{\frac{\log\log n}{\log d}})$, is required. We also develop a novel analytical framework for $\ell_\infty$ guarantee regression that utilizes the Oblivious Coordinate-wise Embedding (OCE) property introduced in [Song and Yu, ICML'21]. Our analysis is arguably much simpler and more general than [Price, Song and Woodruff, ICALP'17], and it extends to dense sketches for tensor products of vectors.
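
A minimal numpy illustration of the sketch-and-solve recipe with a dense Gaussian sketch follows; it only shows the pipeline and does not reproduce the paper's specific distribution, row count $m=\epsilon^{-2}d\log^3(n/\delta)$, or the $O(nd\log n)$ sketching time.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 5000, 20, 400          # m rows in the sketch, m << n

A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Dense Gaussian sketch S, scaled so that E[S^T S] = I.
S = rng.standard_normal((m, n)) / np.sqrt(m)

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)            # exact LS solution
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)  # sketched solution

print(np.max(np.abs(x_sketch - x_star)))  # coordinate-wise (ell_inf) error
```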

Flip-width: Cops and Robber on dense graphs

from arXiv: Data Structures and Algorithms

Authors: Symon Toruńczyk

We define new graph parameters that generalize tree-width, degeneracy, and generalized coloring numbers for sparse graphs, and clique-width and twin-width for dense graphs. Those parameters are defined using variants of the Cops and Robber game, in which the robber has speed bounded by a fixed constant $r\in\mathbb N\cup\{\infty\}$, and the cops perform flips (or perturbations) of the considered graph. We propose a new notion of tameness of a graph class, called bounded flip-width, which is a dense counterpart of classes of bounded expansion of Nešetřil and Ossona de Mendez, and includes classes of bounded twin-width of Bonnet, Kim, Thomassé and Watrigant. We prove that boundedness of flip-width is preserved by first-order interpretations, or transductions, generalizing previous results concerning classes of bounded expansion and bounded twin-width. We provide an algorithm approximating the flip-width of a given graph, which runs in slicewise polynomial time (XP) in the size of the graph. We also propose a more general notion of tameness, called almost bounded flip-width, which is a dense counterpart of nowhere dense classes, and includes all structurally nowhere dense classes. We conjecture, and provide evidence, that classes with almost bounded flip-width coincide with monadically dependent classes, introduced by Shelah in model theory.

Improved Exact and Heuristic Algorithms for Maximum Weight Clique

from arXiv: Data Structures and Algorithms

Authors: Roman Erhardt, Kathrin Hanauer, Nils Kriege, Christian Schulz, Darren Strash

We propose improved exact and heuristic algorithms for solving the maximum weight clique problem, a well-known problem in graph theory with many applications. Our algorithms interleave successful techniques from related work with novel data reduction rules that use local graph structure to identify and remove vertices and edges while retaining the optimal solution. We evaluate our algorithms on a range of synthetic and real-world graphs, and find that they outperform the current state of the art on most inputs. Our data reductions always produce smaller reduced graphs than existing data reductions alone. As a result, our exact algorithm, MWCRedu, finds solutions orders of magnitude faster on naturally weighted, medium-sized map labeling graphs and random hyperbolic graphs. Our heuristic algorithm, MWCPeel, outperforms its competitors on these instances, but is slightly less effective on extremely dense or large instances.
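
One classical reduction in this spirit (our generic formulation, not necessarily one of the paper's new rules): a vertex $v$ can be deleted once the weight of $v$ plus the weight of its neighborhood does not exceed the weight of the best clique found so far, since no clique through $v$ can then improve on the incumbent.

```python
def reduce_by_neighborhood_weight(adj, weight, best_weight):
    """Repeatedly delete vertices v with
    weight[v] + sum of neighbor weights <= best_weight:
    no clique through v can be heavier than the clique already known,
    so the optimal weight is preserved.  `adj` maps v -> set of neighbors."""
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if weight[v] + sum(weight[u] for u in adj[v]) <= best_weight:
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                changed = True
    return adj

# Triangle {a, b, c} plus a light pendant vertex d attached to a.
adj = {'a': {'b', 'c', 'd'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}, 'd': {'a'}}
w = {'a': 3, 'b': 2, 'c': 2, 'd': 1}
# A clique {a, b} of weight 5 is already known; the rule removes d.
print(reduce_by_neighborhood_weight(adj, w, best_weight=5))
```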

The Investment Management Game: Extending the Scope of the Notion of Core

from arXiv: Data Structures and Algorithms

Authors: Vijay V. Vazirani

The core is a dominant solution concept in economics and game theory. In this context, the following question arises, ``How versatile is this solution concept?'' We note that within game theory, this notion has been used for profit -- equivalently, cost or utility -- sharing only. In this paper, we show a completely different use for it: in an {\em investment management game}, in which an agent needs to allocate her money among investment firms in such a way that {\em in each of exponentially many future scenarios}, sufficient money is available in the ``right'' firms so she can buy an ``optimal investment'' for that scenario.

We study a restriction of this game to {\em perfect graphs} and characterize its core. Our characterization is analogous to Shapley and Shubik's characterization of the core of the assignment game. The difference is the following: whereas their characterization follows from {\em total unimodularity}, ours follows from {\em total dual integrality}. The latter is another novelty of our work.

Parameterized Algorithms for Colored Clustering

from arXiv: Data Structures and Algorithms

Authors: Leon Kellerhals, Tomohiro Koana, Pascal Kunz, Rolf Niedermeier

In the Colored Clustering problem, one is asked to cluster edge-colored (hyper-)graphs whose colors represent interaction types. More specifically, the goal is to select as many edges as possible without choosing two edges that share an endpoint and are colored differently. Equivalently, the goal can also be described as assigning colors to the vertices in a way that fits the edge-coloring as well as possible. As this problem is NP-hard, we build on previous work by studying its parameterized complexity. We give a $2^{\mathcal O(k)} \cdot n^{\mathcal O(1)}$-time algorithm where $k$ is the number of edges to be selected and $n$ the number of vertices. We also prove the existence of a problem kernel of size $\mathcal O(k^{5/2} )$, resolving an open problem posed in the literature. We consider parameters that are smaller than $k$, the number of edges to be selected, and $r$, the number of edges that can be deleted. Such smaller parameters are obtained by considering the difference between $k$ or $r$ and some lower bound on these values. We give both algorithms and lower bounds for Colored Clustering with such parameterizations. Finally, we settle the parameterized complexity of Colored Clustering with respect to structural graph parameters by showing that it is $W[1]$-hard with respect to both vertex cover number and tree-cut width, but fixed-parameter tractable with respect to slim tree-cut width.
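
To make the two equivalent formulations concrete: under a vertex coloring, an edge is selected exactly when both endpoints receive its color. The sketch below (ours) evaluates this objective and brute-forces the optimum on tiny instances; it is not one of the paper's parameterized algorithms.

```python
from itertools import product

def satisfied_edges(edges, assignment):
    """Edges (u, v, color) whose endpoints are both assigned the edge's color."""
    return [e for e in edges
            if assignment[e[0]] == e[2] and assignment[e[1]] == e[2]]

def brute_force_colored_clustering(vertices, edges, colors):
    """Try every vertex coloring and keep one satisfying the most edges
    (only sensible for very small instances)."""
    best, best_assignment = -1, None
    for choice in product(colors, repeat=len(vertices)):
        assignment = dict(zip(vertices, choice))
        score = len(satisfied_edges(edges, assignment))
        if score > best:
            best, best_assignment = score, assignment
    return best, best_assignment

# Path a-b-c with conflicting colors on the two edges around b:
# at most one of the two edges can be selected.
edges = [('a', 'b', 'red'), ('b', 'c', 'blue')]
print(brute_force_colored_clustering(['a', 'b', 'c'], edges, ['red', 'blue']))
```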

Adding a Tail in Classes of Perfect Graphs

from arXiv: Data Structures and Algorithms

Authors: Anna Mpanti, Stavros D. Nikolopoulos, Leonidas Palios

Consider a graph $G$ which belongs to a graph class ${\cal C}$. We are interested in connecting a node $w \not\in V(G)$ to $G$ by a single edge $uw$ where $u \in V(G)$; we call such an edge a \emph{tail}. As the graph resulting from $G$ after the addition of the tail, denoted $G+uw$, need not belong to the class ${\cal C}$, we want to compute a minimum ${\cal C}$-completion of $G+uw$, i.e., the minimum number of non-edges (excluding the tail $uw$) to be added to $G+uw$ so that the resulting graph belongs to ${\cal C}$. In this paper, we study this problem for the classes of split, quasi-threshold, threshold, and $P_4$-sparse graphs and we present linear-time algorithms by exploiting the structure of split graphs and the tree representation of quasi-threshold, threshold, and $P_4$-sparse graphs.

Wednesday, February 01

Near-perfect Reachability of Variational Quantum Search with Depth-1 Ansatz

from arXiv: Computational Complexity

Authors: Junpeng Zhan

Grover's search algorithm is renowned for its dramatic speedup in solving many important scientific problems. The recently proposed Variational Quantum Search (VQS) algorithm has shown an exponential advantage over Grover's algorithm for up to 26 qubits. However, its advantage for larger numbers of qubits has not yet been proven. Here we show that the exponentially deep circuit required by Grover's algorithm can be replaced by a multi-controlled NOT gate together with either a single layer of Ry gates or two layers of circuits consisting of Hadamard and NOT gates, which is valid for any number of qubits greater than five. We prove that the VQS, with a single layer of Ry gates as its Ansatz, has near-perfect reachability in finding the good element of an arbitrarily large unstructured data set, and its reachability exponentially improves with the number of qubits, where the reachability is defined to quantify the ability of a given Ansatz to generate an optimal quantum state. Numerical studies further validate the excellent reachability of the VQS. Proving the near-perfect reachability of the VQS, with a depth-1 Ansatz, for any number of qubits completes an essential step in proving its exponential advantage over Grover's algorithm for any number of qubits; the latter proof is significant, as it would mean that the VQS can efficiently solve NP-complete problems.

A Finer Analysis of Multi-Structural Games and Beyond

from arXiv: Computational Complexity

Authors: Marco Carmosino, Ronald Fagin, Neil Immerman, Phokion Kolaitis, Jonathan Lenchner, Rik Sengupta

Multi-structural (MS) games are combinatorial games that capture the number of quantifiers of first-order sentences. On the face of their definition, MS games differ from Ehrenfeucht-Fraisse (EF) games in two ways: first, MS games are played on two sets of structures, while EF games are played on a pair of structures; second, in MS games, Duplicator can make any number of copies of structures. In the first part of this paper, we perform a finer analysis of MS games and develop a closer comparison of MS games with EF games. In particular, we point out that the use of sets of structures is of the essence and that when MS games are played on pairs of structures, they capture Boolean combinations of first-order sentences with a fixed number of quantifiers. After this, we focus on another important difference between MS games and EF games, namely, the necessity for Spoiler to play on top of a previous move in order to win some MS games. Via an analysis of the types realized during MS games, we delineate the expressive power of the variant of MS games in which Spoiler never plays on top of a previous move. In the second part we focus on simultaneously capturing number of quantifiers and number of variables in first-order logic. We show that natural variants of the MS game do not achieve this. We then introduce a new game, the quantifier-variable tree game, and show that it simultaneously captures the number of quantifiers and number of variables.

Limits of structures and Total NP Search Problems

from arXiv: Computational Complexity

Authors: Ondřej Ježil

For a class of finite graphs, we define a limit object relative to some computationally restricted class of functions. The properties of the limit object then reflect how a computationally restricted viewer "sees" a generic instance from the class. The construction uses Krajíček's forcing with random variables [7]. We prove sufficient conditions for universal and existential sentences to be valid in the limit, provide several examples, and prove that such a limit object can then be expanded to a model of weak arithmetic. We then take the limit of all finite pointed paths to obtain a model of arithmetic where the problem OntoWeakPigeon is total but Leaf (the complete problem for $\textbf{PPA}$) is not. This can be viewed as a logical separation of the oracle classes of total NP search problems, which in our setting implies standard nonreducibility of Leaf to OntoWeakPigeon.

A Survey and Benchmark of Automatic Surface Reconstruction from Point Clouds

from arXiv: Computational Geometry

Authors: Raphael Sulzer, Loic Landrieu, Renaud Marlet, Bruno Vallet

We survey and benchmark traditional and novel learning-based algorithms that address the problem of surface reconstruction from point clouds. Surface reconstruction from point clouds is particularly challenging when applied to real-world acquisitions, due to noise, outliers, non-uniform sampling and missing data. Traditionally, different handcrafted priors of the input points or the output surface have been proposed to make the problem more tractable. However, hyperparameter tuning for adjusting priors to different acquisition defects can be a tedious task. To this end, the deep learning community has recently addressed the surface reconstruction problem. In contrast to traditional approaches, deep surface reconstruction methods can learn priors directly from a training set of point clouds and corresponding true surfaces. In our survey, we detail how different handcrafted and learned priors affect the robustness of methods to defect-laden input and their capability to generate geometric and topologically accurate reconstructions. In our benchmark, we evaluate the reconstructions of several traditional and learning-based methods on the same grounds. We show that learning-based methods can generalize to unseen shape categories, but their training and test sets must share the same point cloud characteristics. We also provide the code and data to compete in our benchmark and to further stimulate the development of learning-based surface reconstruction: https://github.com/raphaelsulzer/dsr-benchmark.

Singular Value Approximation and Reducing Directed to Undirected Graph Sparsification

from arXiv: Data Structures and Algorithms

Authors: AmirMahdi Ahmadinejad, John Peebles, Edward Pyne, Aaron Sidford, Salil Vadhan

In this paper, we introduce a new, spectral notion of approximation between directed graphs, which we call Singular Value (SV) approximation. SV-approximation is stronger than previous notions of spectral approximation considered in the literature, including spectral approximation of Laplacians for undirected graphs (Spielman Teng STOC 2004), standard approximation for directed graphs (Cohen et al. STOC 2007), and unit-circle approximation for directed graphs (Ahmadinejad et al. FOCS 2020). Moreover, SV approximation enjoys several useful properties not known to be possessed by previous notions of approximation, such as being preserved under products of random-walk matrices and with matrices of bounded norm.

Notably, we show that there is a simple black-box reduction from SV-sparsifying Eulerian directed graphs to SV-sparsifying undirected graphs. With this reduction in hand, we provide a nearly linear-time algorithm for SV-sparsifying undirected and hence also Eulerian directed graphs. This also yields the first nearly linear-time algorithm for unit-circle-sparsifying Eulerian directed graphs. In addition, we give a nearly linear-time algorithm for SV-sparsifying (and UC-sparsifying) random-walk polynomials of Eulerian directed graphs with second normalized singular value bounded away from $1$ by $1/\text{poly}(n)$.

Finally, we show that a simple repeated-squaring and sparsification algorithm for solving Laplacian systems, introduced by (Peng Spielman STOC 2014) for undirected graphs, also works for Eulerian digraphs whose random-walk matrix is normal (i.e. unitarily diagonalizable), if we use SV-sparsification at each step. Prior Laplacian solvers for Eulerian digraphs are significantly more complicated.

A Safety Framework for Flow Decomposition Problems via Integer Linear Programming

from arXiv: Data Structures and Algorithms

Authors: Fernando H. C. Dias, Manuel Caceres, Lucia Williams, Brendan Mumey, Alexandru I. Tomescu

Many important problems in Bioinformatics (e.g., assembly or multi-assembly) admit multiple solutions, while the final objective is to report only one. A common approach to deal with this uncertainty is finding safe partial solutions (e.g., contigs) which are common to all solutions. Previous research on safety has focused on polynomial-time solvable problems, whereas many successful and natural models are NP-hard to solve, leaving a lack of "safety tools" for such problems. We propose the first method for computing all safe solutions for an NP-hard problem, minimum flow decomposition. We obtain our results by developing a "safety test" for paths based on a general Integer Linear Programming (ILP) formulation. Moreover, we provide implementations with practical optimizations aimed to reduce the total ILP time, the most efficient of these being based on a recursive group-testing procedure.

Results: Experimental results on the transcriptome datasets of Shao and Kingsford (TCBB, 2017) show that all safe paths for minimum flow decompositions correctly recover up to 90% of the full RNA transcripts, which is at least 25% more than previously known safe paths, such as (Caceres et al. TCBB, 2021), (Zheng et al., RECOMB 2021), (Khan et al., RECOMB 2022, ESA 2022). Moreover, despite the NP-hardness of the problem, we can report all safe paths for 99.8% of the over 27,000 non-trivial graphs of this dataset in only 1.5 hours. Our results suggest that, on perfect data, there is less ambiguity than thought in the notoriously hard RNA assembly problem.

Availability: https://github.com/algbio/mfd-safety
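
As background on the object being decomposed (a plain greedy heuristic, not the paper's ILP-based safety test): a conserved $s$-$t$ flow on a DAG can be peeled into weighted source-to-sink paths by repeatedly following the heaviest remaining edge; minimum flow decomposition asks for the fewest such paths.

```python
def greedy_flow_decomposition(flow, source, sink):
    """Peel a DAG flow into weighted source-to-sink paths by always following
    the outgoing edge with the largest remaining flow (greedy heuristic,
    not guaranteed to use the minimum number of paths).
    `flow` maps (u, v) -> positive flow value of a conserved s-t flow;
    it is consumed in place."""
    paths = []
    while any(u == source and f > 0 for (u, v), f in flow.items()):
        path, u = [source], source
        while u != sink:
            # Follow the heaviest remaining outgoing edge.
            v, _ = max(((v, f) for (x, v), f in flow.items() if x == u and f > 0),
                       key=lambda t: t[1])
            path.append(v)
            u = v
        w = min(flow[(path[i], path[i + 1])] for i in range(len(path) - 1))
        for i in range(len(path) - 1):
            flow[(path[i], path[i + 1])] -= w
        paths.append((w, path))
    return paths

# s -> {a, b} -> t with total flow 5 = 3 + 2.
flow = {('s', 'a'): 3, ('a', 't'): 3, ('s', 'b'): 2, ('b', 't'): 2}
print(greedy_flow_decomposition(flow, 's', 't'))
# [(3, ['s', 'a', 't']), (2, ['s', 'b', 't'])]
```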

Breadth-First Depth-Next: Optimal Collaborative Exploration of Trees with Low Diameter

from arXiv: Data Structures and Algorithms

Authors: Romain Cosson, Laurent Massoulié, Laurent Viennot

We consider the problem of collaborative tree exploration posed by Fraigniaud, Gasieniec, Kowalski, and Pelc where a team of $k$ agents is tasked to collectively go through all the edges of an unknown tree as fast as possible. Denoting by $n$ the total number of nodes and by $D$ the tree depth, the $\mathcal{O}(n/\log(k)+D)$ algorithm of Fraigniaud et al. achieves the best-known competitive ratio with respect to the cost of offline exploration which is $\Theta(\max{\{2n/k,2D\}})$. Brass, Cabrera-Mora, Gasparri, and Xiao consider an alternative performance criterion, namely the additive overhead with respect to $2n/k$, and obtain a $2n/k+\mathcal{O}((D+k)^k)$ runtime guarantee. In this paper, we introduce `Breadth-First Depth-Next' (BFDN), a novel and simple algorithm that performs collaborative tree exploration in time $2n/k+\mathcal{O}(D^2\log(k))$, thus outperforming Brass et al. for all values of $(n,D)$ and being order-optimal for all trees with depth $D=o_k(\sqrt{n})$. Moreover, a recent result from Disser et al. implies that no exploration algorithm can achieve a $2n/k+\mathcal{O}(D^{2-\epsilon})$ runtime guarantee. The dependency in $D^2$ of our bound is in this sense optimal. The proof of our result crucially relies on the analysis of an associated two-player game. We extend the guarantees of BFDN to: scenarios with limited memory and communication, adversarial setups where robots can be blocked, and exploration of classes of non-tree graphs. Finally, we provide a recursive version of BFDN with a runtime of $\mathcal{O}_\ell(n/k^{1/\ell}+\log(k) D^{1+1/\ell})$ for parameter $\ell\ge 1$, thereby improving performance for trees with large depth.

The Iteration Number of the Weisfeiler-Leman Algorithm

from arXiv: Data Structures and Algorithms

Authors: Martin Grohe, Moritz Lichter, Daniel Neuen

We prove new upper and lower bounds on the number of iterations the $k$-dimensional Weisfeiler-Leman algorithm ($k$-WL) requires until stabilization. For $k \geq 3$, we show that $k$-WL stabilizes after at most $O(kn^{k-1}\log n)$ iterations (where $n$ denotes the number of vertices of the input structures), obtaining the first improvement over the trivial upper bound of $n^{k}-1$ and extending a previous upper bound of $O(n \log n)$ for $k=2$ [Lichter et al., LICS 2019].

We complement our upper bounds by constructing $k$-ary relational structures on which $k$-WL requires at least $n^{\Omega(k)}$ iterations to stabilize. This improves over a previous lower bound of $n^{\Omega(k / \log k)}$ [Berkholz, Nordström, LICS 2016].

We also investigate tradeoffs between the dimension and the iteration number of WL, and show that $d$-WL, where $d = \lceil\frac{3(k+1)}{2}\rceil$, can simulate the $k$-WL algorithm using only $O(k^2 \cdot n^{\lfloor k/2\rfloor + 1} \log n)$ many iterations, but still requires at least $n^{\Omega(k)}$ iterations for any $d$ (that is sufficiently smaller than $n$).

The number of iterations required by $k$-WL to distinguish two structures corresponds to the quantifier rank of a sentence distinguishing them in the $(k + 1)$-variable fragment $C_{k+1}$ of first-order logic with counting quantifiers. Hence, our results also imply new upper and lower bounds on the quantifier rank required in the logic $C_{k+1}$, as well as tradeoffs between variable number and quantifier rank.
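
For intuition on iteration counts, here is the classical one-dimensional analogue, color refinement (1-WL): in each round a vertex's color is replaced by its old color together with the multiset of its neighbors' colors, until the partition stops refining. This is our illustrative sketch of 1-WL, not the $k$-WL algorithm analyzed in the paper.

```python
def color_refinement_iterations(adj):
    """Run 1-WL (color refinement) on a graph given as {v: set of neighbors}
    and return (number of refining rounds until stabilization, final coloring)."""
    color = {v: 0 for v in adj}                      # uniform initial coloring
    iterations = 0
    while True:
        signature = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
                     for v in adj}
        # Canonically rename signatures to small integers.
        palette = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_color = {v: palette[signature[v]] for v in adj}
        # Refinement never merges classes, so an unchanged class count
        # means the partition is stable.
        if len(set(new_color.values())) == len(set(color.values())):
            return iterations, color
        color = new_color
        iterations += 1

# A path on 4 vertices: endpoints separate from inner vertices after 1 round,
# and the partition is stable afterwards.
path4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(color_refinement_iterations(path4))  # (1, {1: 0, 2: 1, 3: 1, 4: 0})
```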
