Theory of Computing Report
TR24-042 | Pebble Games and Algebraic Proof Systems Meet Again |
Jacobo Toran, Lisa Jaser
https://eccc.weizmann.ac.il/report/2024/042
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body><p>Analyzing refutations of the well-known
pebbling formulas, we prove some new strong connections between pebble games and algebraic proof systems, showing that
there is a parallelism between the reversible, black, and black-white pebbling games on one side and
the three algebraic proof systems NS, MC, and PC on the other side. In particular we prove:
\begin{itemize}
\item For any DAG $G$ with a single sink, if there is a Monomial Calculus (MC) refutation
for $Peb(G)$ having simultaneously degree $s$ and size $t$,
then there is a black pebbling strategy on $G$ with space $s$ and time $t+s$.
Conversely, if there is a black pebbling strategy for $G$ with space $s$ and time $t$, it is possible to extract from it
an MC refutation
for $Peb(G)$ having simultaneously degree $s$ and size $2t(s-1)$.
These results are analogous to those proven in [de Rezende et al. 21] for the case of reversible pebbling and
Nullstellensatz.
Using them we prove degree separations between NS and MC
as well as strong degree-size tradeoffs for MC.
\item We show that the variable space needed for the refutation of pebbling formulas in Polynomial Calculus exactly
coincides with the black-white pebbling number of the corresponding graph.
One direction of this result was known; we present a new elementary proof of it.
\item
We show that for any unsatisfiable CNF formula $F,$ the variable space in a Resolution refutation
of the formula is a lower bound for the monomial space in a PCR refutation for
the extended formula $F[\oplus]$.
This implies that
for any DAG
$G$, the monomial space needed in a PCR refutation of an XOR pebbling formula is lower bounded
by the black-white pebbling number of the corresponding graph. This affirmatively solves Open Problem 7.11 from
[Buss Nordström 21].
\item
The last result
also
proves a strong separation between degree and monomial space in PCR of size $\Omega(\frac{n}{\log n})$
with the additional property
that it is independent of the field characteristic. This question
was posed in [Filmus et al. 13].
\end{itemize}</p></body></html>
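The black pebbling game that drives these results is easy to make concrete. The Python sketch below is our own illustration (the DAG, names, and strategy are hypothetical, not taken from the paper): it validates a black pebbling strategy and reports the space (maximum pebbles in use) and time (number of placements) it achieves.

```python
# Black pebbling game on a DAG: a move may place a pebble on a node
# whose predecessors are all pebbled (sources need no predecessors),
# or remove any pebble. The game ends with the sink pebbled.
# Space = max pebbles in use at once; time = number of placements.

def validate_strategy(preds, sink, moves):
    """Check a black pebbling strategy and return (space, time).

    preds: dict node -> list of predecessor nodes
    moves: list of ('place', v) / ('remove', v) moves
    """
    pebbled, space, time = set(), 0, 0
    for op, v in moves:
        if op == 'place':
            assert all(u in pebbled for u in preds[v]), f"predecessors of {v} unpebbled"
            pebbled.add(v)
            time += 1
            space = max(space, len(pebbled))
        else:
            pebbled.remove(v)
    assert sink in pebbled, "sink not pebbled at the end"
    return space, time

# Example: a pyramid of height 2 -- base a, b, c feeds d, e, which feed sink f.
preds = {'a': [], 'b': [], 'c': [], 'd': ['a', 'b'], 'e': ['b', 'c'], 'f': ['d', 'e']}
moves = [('place', 'a'), ('place', 'b'), ('place', 'd'), ('remove', 'a'),
         ('place', 'c'), ('place', 'e'), ('remove', 'b'), ('remove', 'c'),
         ('place', 'f')]
print(validate_strategy(preds, 'f', moves))  # → (4, 6)
```

In the theorem above, a strategy like this with space $s$ and time $t$ would be converted into an MC refutation of $Peb(G)$ of degree $s$ and size $2t(s-1)$.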
2024-03-03 07:43:10 UTC | ECCC Papers

Two Professorships (open rank) interfacing classical Network Science and Graph Machine Learning at Goethe University Frankfurt, Center for Critical Computational Studies (C3S) (apply by April 2, 2024)
https://cstheory-jobs.org/2024/03/02/two-professorships-open-rank-interfacing-classical-network-science-and-graph-machine-learning-at-goethe-university-frankfurt-center-for-critical-computational-studies-c3s-apply-by-april-2-2024/
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p>Ideal candidates will possess a robust cross-disciplinary profile or interest, demonstrating not only expertise within their specific domains but also a genuine openness to collaborative, cross-disciplinary work. While the primary focus is on researchers with a strong background in network science, computer science or mathematics, other degrees in suitable application areas are also welcome.</p>
<p>Website: <a href="https://www.c3s-frankfurt.de/workshop">https://www.c3s-frankfurt.de/workshop</a><br>
Email: office@c3s.uni-frankfurt.de</p>
<p class="authors">By shacharlovett</p>
</body></html>
2024-03-02 21:17:53 UTC | CCI: jobs

Postdoc at Yale University (apply by March 15, 2024)
https://cstheory-jobs.org/2024/03/02/postdoc-at-yale-university-apply-by-march-15-2024/
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p>A postdoc position is available, focusing on emerging problems in the areas of the foundations of AI (specifically, extending diffusion models or understanding/evaluating LLMs) or in the area of responsible AI (specifically, fairness and privacy).</p>
<p>Submit CV, research statement, and 3 letters by March 15, 2024.</p>
<p>Strong math background, with experience in modeling and empirical work required.</p>
<p>Website: <a href="https://www.cs.yale.edu/homes/vishnoi/Home.html">https://www.cs.yale.edu/homes/vishnoi/Home.html</a><br>
Email: nisheeth.vishnoi@gmail.com</p>
<p class="authors">By shacharlovett</p>
</body></html>
2024-03-02 15:28:58 UTC | CCI: jobs

Sum(m)it280 – Frankl, Füredi, Győri, and Pach are 70
https://cstheory-events.org/2024/03/02/summit280-frank-furedi-gyori-and-pach-are-70/
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p>July 8-12, 2024, Budapest, Hungary. https://conferences.renyi.hu/summit280/home<br>
Submission deadline: March 31, 2024. Registration deadline: May 15, 2024.<br>
In 2024, Péter Frankl, Zoltán Füredi, Ervin Győri and János Pach will turn 70. On the occasion of this joyful event, we are organizing the conference Sum(m)it280. We would like to invite you to celebrate these four Hungarian combinatorialists with us. List of … <a href="https://cstheory-events.org/2024/03/02/summit280-frank-furedi-gyori-and-pach-are-70/" class="more-link">Continue reading <span class="screen-reader-text">Sum(m)it280 – Frankl, Füredi, Győri, and Pach are 70</span></a></p>
<p class="authors">By shacharlovett</p>
</body></html>
2024-03-02 04:22:16 UTC | CS Theory Events

TR24-041 | Launching Identity Testing into (Bounded) Space |
Nikhil Gupta, Pranav Bisht, Prajakta Nimbhorkar, Ilya Volkovich
https://eccc.weizmann.ac.il/report/2024/041
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body><p>In this work, we initiate the study of the space complexity of the Polynomial Identity Testing problem (PIT).
First, we observe that the majority of the existing (time-)efficient ``blackbox'' PIT algorithms already give rise to space-efficient ``whitebox'' algorithms for the respective classes of arithmetic formulas via a space-efficient arithmetic formula evaluation procedure. Among other things, we observe that the results of Minahan-Volkovich (ACM Transactions on Computation Theory, 2018), Gurjar et al. (Theory of Computing, 2017) and Agrawal et al. (SIAM Journal on Computing, 2016) imply logspace PIT algorithms for read-once formulas, constant-width read-once oblivious branching programs, and bounded-transcendence-degree depth-3 circuits, respectively.
However, since the best known blackbox PIT algorithms for the class of multilinear read-$k$ formulas run in quasi-polynomial time, as shown in Anderson et al. (Computational Complexity, 2015), our previous observation only yields an $O(\log^2 n)$-space whitebox PIT algorithm. Our main result, thus, is the first $O(\log n)$-space PIT algorithm for multilinear read-twice formulas. We also extend this result to test if a given read-twice formula is equal to a given read-once formula.
Our technical contributions include the development of a space-efficient measure $\mu_\ell$ which is ``distilled'' from the result of Anderson et al. (Computational Complexity, 2015) and can be used to reduce PIT for a read-$k$ formula to PIT for a sum of two read-$(k-1)$ formulas, in logarithmic space.
In addition, we show how to combine a space-efficient blackbox PIT algorithm for read-$(k-1)$ formulas together with a space-efficient whitebox PIT algorithm for read-$k$ formulas to test if a given read-$k$ formula is equal to a given read-$(k-1)$ formula.</p></body></html>
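For contrast with the deterministic space-efficient algorithms above, the classical randomized blackbox approach to PIT is a one-liner. The sketch below (our own illustration, with made-up example polynomials; it is not the paper's algorithm) applies the Schwartz–Zippel lemma: a nonzero $n$-variate polynomial of total degree $d$ vanishes at a uniformly random point of an $S^n$ grid with probability at most $d/|S|$.

```python
# Randomized blackbox PIT via Schwartz-Zippel. One-sided error:
# a "not zero" answer is always correct; a "zero" answer is wrong
# with probability at most (d / |S|)^trials for a nonzero input.
import random

def is_probably_zero(poly, nvars, degree, trials=20):
    """poly: a blackbox callable taking a tuple of ints."""
    for _ in range(trials):
        point = tuple(random.randrange(2 * degree + 1) for _ in range(nvars))
        if poly(point) != 0:
            return False
    return True

# An identity between two formulas: (x+y)*(x-y) - (x*x - y*y) == 0.
f = lambda p: (p[0] + p[1]) * (p[0] - p[1]) - (p[0] * p[0] - p[1] * p[1])
g = lambda p: p[0] * p[1] - 1  # a nonzero polynomial
print(is_probably_zero(f, nvars=2, degree=2))  # → True
print(is_probably_zero(g, nvars=2, degree=2))  # almost surely False
```

The point of the paper is that such testing uses randomness and (for whitebox evaluation) working space; the results above trade the randomness away while keeping the space down to $O(\log n)$.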
2024-03-01 21:17:45 UTC | ECCC Papers

Dominic Welsh, 1938–2023
https://rjlipton.wpcomstaging.com/2024/03/01/dominic-welsh-1938-2023/
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p>
<font color="#0044cc"><br>
<em>My doctoral thesis advisor</em><br>
<font color="#000000"></font></font></p>
<table class="image alignright">
<tbody>
<tr>
<td>
<a href="https://rjlipton.wpcomstaging.com/2024/03/01/dominic-welsh-1938-2023/dominicbybridget/" rel="attachment wp-att-22803"><img decoding="async" data-attachment-id="22803" data-permalink="https://rjlipton.wpcomstaging.com/2024/03/01/dominic-welsh-1938-2023/dominicbybridget/" data-orig-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/DominicByBridget.png?fit=1227%2C1205&ssl=1" data-orig-size="1227,1205" data-comments-opened="1" data-image-meta='{"aperture":"0","credit":"","camera":"","caption":"","created_timestamp":"0","copyright":"","focal_length":"0","iso":"0","shutter_speed":"0","title":"","orientation":"0"}' data-image-title="DominicByBridget" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/DominicByBridget.png?fit=300%2C295&ssl=1" data-large-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/DominicByBridget.png?fit=600%2C589&ssl=1" src="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/DominicByBridget.png?resize=175%2C172&ssl=1" alt="" width="175" height="172" class="alignright size-full wp-image-22803" srcset="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/DominicByBridget.png?w=1227&ssl=1 1227w, https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/DominicByBridget.png?resize=300%2C295&ssl=1 300w, https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/DominicByBridget.png?resize=1024%2C1006&ssl=1 1024w, https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/DominicByBridget.png?resize=768%2C754&ssl=1 768w, https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/DominicByBridget.png?resize=1200%2C1178&ssl=1 1200w" sizes="(max-width: 175px) 100vw, 175px" data-recalc-dims="1"></a>
</td>
</tr>
<tr>
<td class="caption alignright"><font size="-2">Geoffrey Grimmett <a href="https://www.statslab.cam.ac.uk/~grg/books/ccc.html">source</a> </font></td>
</tr>
</tbody>
</table>
<p>
Dominic Welsh passed away last November 30th. He was my doctoral advisor 1981–86 at Merton College, Oxford University, and a giant in several fields of combinatorial and applied mathematics. </p>
<p>
Today I remember Dominic and describe his late-career influence on a modern problem: How “natural” is bounded-error quantum polynomial time?</p>
<p>
Among <a href="https://professordominicwelsh.muchloved.com/">several</a> <a href="https://www.merton.ox.ac.uk/news/professor-dominic-welsh-1938-2023">memorials</a> by <a href="https://cameroncounts.wordpress.com/2023/12/04/dominic-welsh/">colleagues</a> and <a href="https://uwaterloo.ca/math/news/remembering-dominic-welsh">friends</a>, there is now a <a href="http://matroidunion.org/?p=5304">detailed tribute</a> on the <em>Matroid Union</em> blog by my fellow student and Oxford officemate Graham Farr with fellow <a href="https://www.genealogy.math.ndsu.nodak.edu/id.php?id=53162&fChrono=1">students</a> Dillon Mayhew and James Oxley. Graham lays out the full variety of Dominic’s career, beginning with his work on discrete probability and then going on to matroid theory—work that led him to write the <a href="https://books.google.com/books/about/Matroid_Theory.html">definitive text</a> <i>Matroid Theory</i> already in 1976.</p>
<p>
I first met Dominic on a visit to Oxford in August 1980, on my way back from playing chess in Norway. This was a month before the start of my senior year at Princeton and my decision to apply for a Marshall Scholarship. I was beginning an undergraduate thesis on algebraic combinatorics supervised by Doug West and Harold Kuhn. Although I had attended some talks on computational complexity, including one by Eugene Luks on isomorphism for graphs of bounded degree being in <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BP%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{P}}" class="latex">, I had not thought of doing complexity as a research area of itself until I arrived at Merton College in September 1981. Dominic was so enthusiastic about complexity that I never heard much about matroid theory from him; ironically I have come to that subject <a href="https://rjlipton.wpcomstaging.com/2019/08/26/a-matroid-quantum-connection/">only</a> <a href="https://rjlipton.wpcomstaging.com/2020/02/01/subliminal-graph-duals/">fairly</a> <a href="https://rjlipton.wpcomstaging.com/2020/02/11/using-negative-nodes-to-count/">recently</a>.</p>
<p>
</p>
<p></p>
<h2> Computational Complexity and Physical Systems </h2>
<p></p>
<p>
Dominic’s view on complexity came from statistical physics, a subject proceeding from his own doctoral <a href="https://ora.ox.ac.uk/objects/uuid:4787cf63-9e81-4d9c-a49b-d7ccae6286f3">thesis</a> in 1964 under John Hammersley. He was most interested in counting problems. Leslie Valiant had recently introduced the complexity class <a href="https://en.wikipedia.org/wiki/#P"><img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7B%5C%23P%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{\#P}}" class="latex"></a> and <a href="https://en.wikipedia.org/wiki/#P-completeness_of_01-permanent">shown</a> that computing the permanent of a matrix is <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7B%5C%23P%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{\#P}}" class="latex">-complete, hence <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BNP%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{NP}}" class="latex">-hard. The problems of counting satisfying assignments, graph 3-colorings, Hamiltonian cycles, and perfect matchings in bipartite graphs are also <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7B%5C%23P%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{\#P}}" class="latex">-complete.</p>
<p>
The first three are counting versions of <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BNP%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{NP}}" class="latex">-complete decision problems, but the decision version of the last—does a graph have a perfect matching?—belongs to <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BP%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{P}}" class="latex">. The counting problem <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7B%5C%232SAT%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{\#2SAT}}" class="latex"> for clause size <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bk%3D2%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{k=2}" class="latex"> is <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7B%5C%23P%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{\#P}}" class="latex">-complete, even though <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7B2SAT%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{2SAT}}" class="latex"> is in <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BP%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{P}}" class="latex">.</p>
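The gap between deciding and counting is easy to feel with a toy example. The brute-force counter below (our own illustration) enumerates all assignments; it runs in exponential time, and unless FP = #P no polynomial-time counter exists, even though deciding 2SAT is in P.

```python
# Counting vs deciding: 2SAT is decidable in polynomial time, but
# #2SAT is #P-complete. This counter is exponential -- intuition only.
from itertools import product

def count_sat(clauses, nvars):
    """clauses: list of 2-literal clauses; literal k > 0 means x_k,
    k < 0 means NOT x_k. Returns the number of satisfying assignments."""
    count = 0
    for assign in product([False, True], repeat=nvars):
        ok = all(any(assign[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        count += ok
    return count

# (x1 OR x2) AND (NOT x1 OR x3) over three variables:
print(count_sat([(1, 2), (-1, 3)], 3))  # → 4
```

The decision version only needs to know whether the count is nonzero, which for 2-literal clauses reduces to reachability in an implication graph.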
<p>
What fascinated Dominic even more was that instances of the hard problems can often be structured with a natural parameter <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bp%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{p}" class="latex"> so that the complexity transitions from easy to hard in a narrow <em>critical region</em> as <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bp%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{p}" class="latex"> varies. The classic example is the ratio of clauses to variables in <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bk%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{k}" class="latex">-SAT instances and other forms of constraint satisfaction. Here is a diagram for 3SAT from a 2013 <a href="https://www.mdpi.com/2078-2489/4/1/60">paper</a> by Marcelo Finger and Poliana Reis, showing both the probability of being satisfiable and the time taken by an exhaustive algorithm:</p>
<p>
<a href="https://rjlipton.wpcomstaging.com/2024/03/01/dominic-welsh-1938-2023/3satphase/" rel="attachment wp-att-22804"><img fetchpriority="high" decoding="async" data-attachment-id="22804" data-permalink="https://rjlipton.wpcomstaging.com/2024/03/01/dominic-welsh-1938-2023/3satphase/" data-orig-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/3SATphase.png?fit=2409%2C1139&ssl=1" data-orig-size="2409,1139" data-comments-opened="1" data-image-meta='{"aperture":"0","credit":"","camera":"","caption":"","created_timestamp":"0","copyright":"","focal_length":"0","iso":"0","shutter_speed":"0","title":"","orientation":"0"}' data-image-title="3SATphase" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/3SATphase.png?fit=300%2C142&ssl=1" data-large-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/3SATphase.png?fit=600%2C284&ssl=1" src="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/3SATphase.png?resize=360%2C170&ssl=1" alt="" width="360" height="170" class="aligncenter wp-image-22804" srcset="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/3SATphase.png?w=2409&ssl=1 2409w, https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/3SATphase.png?resize=300%2C142&ssl=1 300w, https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/3SATphase.png?resize=1024%2C484&ssl=1 1024w, https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/3SATphase.png?resize=768%2C363&ssl=1 768w, https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/3SATphase.png?resize=1536%2C726&ssl=1 1536w, https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/3SATphase.png?resize=2048%2C968&ssl=1 2048w, https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/3SATphase.png?resize=1200%2C567&ssl=1 1200w, 
https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/3SATphase.png?w=1800&ssl=1 1800w" sizes="(max-width: 360px) 100vw, 360px" data-recalc-dims="1"></a></p>
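The shape of that curve can be reproduced at home. The following experiment (our own sketch; the instance sizes and seed are arbitrary) draws random 3SAT instances at several clause-to-variable ratios and measures the fraction that are satisfiable by brute force, so it is only feasible for small $n$.

```python
# Empirical 3SAT phase transition: the fraction of satisfiable random
# instances drops sharply near clause/variable ratio ~4.27.
import random
from itertools import product

def random_3sat(n, m, rng):
    """m random 3-clauses over n variables, distinct variables per clause."""
    return [tuple(rng.choice([v, -v]) for v in rng.sample(range(1, n + 1), 3))
            for _ in range(m)]

def satisfiable(clauses, n):
    return any(all(any(a[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
               for a in product([False, True], repeat=n))

rng = random.Random(0)
n, trials = 12, 40
for ratio in (2.0, 3.0, 4.0, 4.5, 5.0, 6.0):
    m = round(ratio * n)
    frac = sum(satisfiable(random_3sat(n, m, rng), n) for _ in range(trials)) / trials
    print(f"m/n = {ratio:.1f}: {frac:.2f} satisfiable")
```

Even at $n = 12$ the drop from nearly all satisfiable to nearly none is visibly concentrated around the critical ratio.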
<p>
For graph coloring and Hamiltonian cycles one can use parameters that govern the distribution for drawing random graph instances. The meta-question here is:</p>
<blockquote><p><b> </b> <em> If in fact <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BP%3DNP%7D%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{\mathsf{P=NP}}" class="latex">, then what accounts for this observed phenomenon of sharp phase transition in evident hardness? </em>
</p></blockquote>
<p></p>
<p>
See this 2012 <a href="https://sites.santafe.edu/~moore/turing-talk.pdf">talk</a> by Cris Moore with hints on how further phase phenomena could possibly unlock <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BP+%5Cneq+NP%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{P \neq NP}}" class="latex">. Stepping back to the early 1980s, all this raised early hope that the theory of physical systems, such as the <a href="https://en.wikipedia.org/wiki/Ising_model">Ising model</a>, could help surmount barriers to complexity lower bounds that were already being felt. </p>
<p>
</p>
<p></p>
<h2> Percolation and a Story </h2>
<p></p>
<p>
The world has just been through a long engagement with a phase-transition parameter, the <a href="https://en.wikipedia.org/wiki/Basic_reproduction_number">reproduction number</a> <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BR_0%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{R_0}" class="latex"> of an epidemic. It is normalized so that if <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BR_0+%3C+1%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{R_0 < 1}" class="latex"> then the infection will be contained in ideal circumstances, but if <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BR_0+%3E+1%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{R_0 > 1}" class="latex"> it will almost certainly pervade the population. </p>
<p>
Dominic introduced me to related examples in <a href="https://en.wikipedia.org/wiki/Percolation_theory">percolation theory</a> that were not causing actual disasters. Open problems emerge almost immediately. Consider <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bn%5Ctimes+n%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{n\times n}" class="latex"> square lattices in which every node is independently colored black with probability <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bp%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{p}" class="latex">. There is a critical value <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bp_c%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{p_c}" class="latex"> such that when <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bp+%3C+p_c%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{p < p_c}" class="latex">, the probability of having a path of black nodes span the lattice goes to <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B0%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{0}" class="latex"> sharply as <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bn+%5Crightarrow+%5Cinfty%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{n \rightarrow \infty}" class="latex">, but for <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bp+%3E+p_c%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{p > p_c}" class="latex"> the probability of a spanning black cluster is nonzero and ramps up quickly. This value has been empirically computed as <a href="https://arxiv.org/abs/cond-mat/0005264"><img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B0.59274621+%5Cpm+0.00000013%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{0.59274621 \pm 0.00000013}" class="latex"></a> but is not known analytically.</p>
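Since no analytic value of $p_c$ is known for site percolation, the empirical estimate comes from exactly this kind of Monte Carlo experiment. A minimal sketch (our own code; lattice size, trial counts, and seed are arbitrary choices): color each site black with probability $p$ and test by breadth-first search for a black path spanning top to bottom.

```python
# Site percolation on an n x n lattice: estimate the probability of a
# top-to-bottom black path as p varies; it jumps near p_c ~ 0.5927.
import random
from collections import deque

def spans(grid):
    """BFS from black sites in the top row; True if the bottom row is reached."""
    n = len(grid)
    seen = {(0, j) for j in range(n) if grid[0][j]}
    queue = deque(seen)
    while queue:
        i, j = queue.popleft()
        if i == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (i + di, j + dj)
            if 0 <= v[0] < n and 0 <= v[1] < n and grid[v[0]][v[1]] and v not in seen:
                seen.add(v)
                queue.append(v)
    return False

def spanning_prob(n, p, trials, rng):
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        hits += spans(grid)
    return hits / trials

rng = random.Random(1)
for p in (0.45, 0.55, 0.59, 0.63, 0.70):
    print(f"p = {p:.2f}: spanning prob ~ {spanning_prob(40, p, 100, rng):.2f}")
```

On a 40-by-40 lattice the transition is already steep; proving where it sits, for lattices other than the few solved cases, is the hard part.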
<p></p>
<p></p>
<table style="margin:auto;">
<tr>
<td>
<a href="https://rjlipton.wpcomstaging.com/2024/03/01/dominic-welsh-1938-2023/sitepercolation/" rel="attachment wp-att-22805"><img decoding="async" data-attachment-id="22805" data-permalink="https://rjlipton.wpcomstaging.com/2024/03/01/dominic-welsh-1938-2023/sitepercolation/" data-orig-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/SitePercolation.png?fit=720%2C261&ssl=1" data-orig-size="720,261" data-comments-opened="1" data-image-meta='{"aperture":"0","credit":"","camera":"","caption":"","created_timestamp":"0","copyright":"","focal_length":"0","iso":"0","shutter_speed":"0","title":"","orientation":"0"}' data-image-title="SitePercolation" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/SitePercolation.png?fit=300%2C109&ssl=1" data-large-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/SitePercolation.png?fit=600%2C218&ssl=1" src="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/SitePercolation.png?resize=480%2C178&ssl=1" alt="" width="480" height="178" class="aligncenter wp-image-22805" data-recalc-dims="1"></a>
</td>
</tr>
<tr>
<td class="caption alignright">
<font size="-2">Paul Heitjans <a href="https://www.researchgate.net/figure/Site-percolation-on-the-square-lattice-The-small-circles-represent-the-occupied-sites_fig1_228337483">source</a></font>
</td>
</tr>
</table>
<p>
I was duly warned that analytic bounds on <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bp_c%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{p_c}" class="latex"> for other families of grids were as hard to do as complexity lower bounds. Dominic and Paul Seymour had made important <a href="https://cse.buffalo.edu/~regan/DJAW/SeymourWelsh1978.pdf">progress</a> on the problem for <em>edges</em> in the square lattice, which had helped Harry Kesten <a href="https://projecteuclid.org/journals/communications-in-mathematical-physics/volume-74/issue-1/The-critical-probability-of-bond-percolation-on-the-square-lattice/cmp/1103907931.full">prove</a> <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bp_c+%3D+0.5%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{p_c = 0.5}" class="latex"> for them in 1980. (See also this 2022 <a href="https://arxiv.org/pdf/2204.01517.pdf">paper</a>.) Thresholds for many other lattice structures have been <a href="https://en.wikipedia.org/wiki/Percolation_threshold">computed empirically</a> but remain open analytically. </p>
<p>
My strongest memory of percolation in my first year had a different purpose. Much work was being done by physicists whose level of rigor was below standard. Dominic set me to probe one such paper, and I confirmed his suspicion of a major gap in the main result. Rather than send a letter, he invited the author over to Oxford, and after suitable entertainment at Fellows Lunch, brought him to meet me at the Mathematical Institute. Having prepped the approach with me beforehand, and with his usual twinkle in the eyes, he invited the author to lay out his proof. I posed questions that led to the flaw, and the poor chap was duly flustered—but appreciative at the same time. At least he was treated in the best fashion.</p>
<p>
</p>
<p></p>
<h2> Complexity </h2>
<p></p>
<p>
I took the mindset, however, that hope of progress in complexity needed not only greater rigor but a full embrace of logic and formal structures. I was primed to be caught up in the <a href="https://en.wikipedia.org/wiki/Structural_complexity_theory">Structural Complexity</a> movement of the 1980s, which enshrined the field’s main conference until it was <a href="https://computationalcomplexity.org/general.php">renamed</a> the “Computational Complexity Conference” in 1996. Among papers I studied on the possible independence of <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BP%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{P}}" class="latex"> versus <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BNP%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{NP}}" class="latex"> from strong formal systems was <a href="https://dl.acm.org/doi/abs/10.1145/800141.804652">this</a> by Dick with Rich DeMillo. </p>
<p>
This went outside Dominic’s interests, but his steady and kindly hand was important especially through a tumultuous fourth year for me. During that fourth year, we co-taught a course on complexity, communication, and coding theory, which figured into his 1988 <a href="https://www.amazon.com/Codes-Cryptography-Dominic-Welsh/dp/0198532873">textbook</a> <em>Codes and Cryptography</em>. Here is the end of its preface:</p>
<p></p>
<p><br>
<a href="https://rjlipton.wpcomstaging.com/2024/03/01/dominic-welsh-1938-2023/welshccack/" rel="attachment wp-att-22806"><img loading="lazy" decoding="async" data-attachment-id="22806" data-permalink="https://rjlipton.wpcomstaging.com/2024/03/01/dominic-welsh-1938-2023/welshccack/" data-orig-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/WelshCCack.png?fit=690%2C508&ssl=1" data-orig-size="690,508" data-comments-opened="1" data-image-meta='{"aperture":"0","credit":"","camera":"","caption":"","created_timestamp":"0","copyright":"","focal_length":"0","iso":"0","shutter_speed":"0","title":"","orientation":"0"}' data-image-title="WelshCCack" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/WelshCCack.png?fit=300%2C221&ssl=1" data-large-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/WelshCCack.png?fit=600%2C442&ssl=1" src="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/WelshCCack.png?resize=460%2C340&ssl=1" alt="" width="460" height="340" class="aligncenter wp-image-22806" srcset="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/WelshCCack.png?w=690&ssl=1 690w, https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/WelshCCack.png?resize=300%2C221&ssl=1 300w" sizes="(max-width: 460px) 100vw, 460px" data-recalc-dims="1"></a></p>
<p></p>
<p><br>
As for doing something about “hieroglyphic handwriting,” I brought in the inexpensive PC-based typesetting system <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BT%5E3%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{T^3}" class="latex"> (now <a href="https://www.sciword.co.uk/">Scientific Word</a>) with the generous support of the Mathematical Institute, where it was housed in room T-3. I manually <a href="https://rjlipton.wpcomstaging.com/2011/03/09/tex-is-great-what-is-tex/#comment-11172">improved</a> all the <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B24+%5Ctimes+18%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{24 \times 18}" class="latex"> pixel math-and-language fonts, which had evidently been digitized rather than crafted, so I joked that I had “written” a dozen dissertations before I finished mine the next year.</p>
<p></p>
<p></p>
<table style="margin:auto;">
<tr>
<td>
<a href="https://rjlipton.wpcomstaging.com/2024/03/01/dominic-welsh-1938-2023/kwrdoctorate/" rel="attachment wp-att-22808"><img loading="lazy" decoding="async" data-attachment-id="22808" data-permalink="https://rjlipton.wpcomstaging.com/2024/03/01/dominic-welsh-1938-2023/kwrdoctorate/" data-orig-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/KWRdoctorate.png?fit=524%2C743&ssl=1" data-orig-size="524,743" data-comments-opened="1" data-image-meta='{"aperture":"0","credit":"","camera":"","caption":"","created_timestamp":"0","copyright":"","focal_length":"0","iso":"0","shutter_speed":"0","title":"","orientation":"0"}' data-image-title="KWRdoctorate" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/KWRdoctorate.png?fit=212%2C300&ssl=1" data-large-file="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/KWRdoctorate.png?fit=524%2C743&ssl=1" src="https://i0.wp.com/rjlipton.wpcomstaging.com/wp-content/uploads/2024/03/KWRdoctorate.png?resize=250%2C360&ssl=1" alt="" width="250" height="360" class="aligncenter wp-image-22808" data-recalc-dims="1"></a>
</td>
</tr>
<tr>
<td class="caption alignright">
<font size="-2">My camera, Merton College, Oxford, October 1986</font>
</td>
</tr>
</table>
<p>
</p>
<p></p>
<h2> Capturing BQP </h2>
<p></p>
<p>
Between then and Dominic’s 2005 retirement, he branched into new strands of complexity involving graph and knot polynomials, which grew into and beyond his 1993 <a href="https://www.abebooks.com/9780521457408/Complexity-Knots-Colourings-Countings-London-0521457408/plp">book</a> <em>Complexity: Knots, Colorings, and Countings</em>. (Although this is by Cambridge University Press, I have added an Oxford comma to the title.) This fostered a connection to quantum computing via the <a href="https://en.wikipedia.org/wiki/Jones_polynomial">Jones polynomial</a> of knot theory. The basic relation was discovered by Michael Freedman, Alexei Kitaev, Michael Larsen, and Zhenghan Wang in their foundational 2003 paper <a href="https://arxiv.org/abs/quant-ph/0101025">“Topological Quantum Computation”</a>:</p>
<blockquote>
<p><b>Theorem 1 (paraphrase)</b> <em> In time polynomial in the size <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bs%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{s}" class="latex"> of a given quantum circuit <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BC%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{C}" class="latex">, an argument <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bx%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{x}" class="latex"> to <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BC%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{C}" class="latex">, and <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Clog%281%2F%5Cepsilon%29%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{\log(1/\epsilon)}" class="latex">, we can create a <a href="https://en.wikipedia.org/wiki/Link_(knot_theory)">knot link</a> <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BL%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{L}" class="latex"> and a simple formula <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bg_L%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{g_L}" class="latex"> such that </em></p>
<p align="center"><img decoding="async" src="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle++%5Cleft%7C%5CPr%5BC%28x%29%3D1%5D+-+g_L%28V_L%28e%5E%7B2%5Cpi+i%2F5%7D%29%29%5Cright%7C+%3C+%5Cepsilon%2C+&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="\displaystyle \left|\Pr[C(x)=1] - g_L(V_L(e^{2\pi i/5}))\right| < \epsilon, " class="latex"></p>
<p>where <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BV_L%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{V_L}" class="latex"> is the Jones polynomial of <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BL%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{L}" class="latex">.
</p>
</blockquote>
<p></p>
<p>
In contrast to polynomial translations I’ve discussed <a href="https://rjlipton.wpcomstaging.com/2012/07/08/grilling-quantum-circuits/">here</a> and <a href="https://rjlipton.wpcomstaging.com/2017/11/20/a-magic-madison-visit/">here</a>, there is only <b>one</b> evaluation of the Jones polynomial—and at a “magic” fifth root of unity. </p>
<p>
Now Dominic—with his student Dirk Vertigan and with François Jaeger of Grenoble—<a href="https://gwern.net/doc/math/1990-jaeger.pdf">showed</a> that evaluating <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BV_L%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{V_L}" class="latex"> at any point other than a <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B2%2C3%2C4%2C6%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{2,3,4,6}" class="latex">th root of unity is <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7B%5C%23P%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{\#P}}" class="latex">-hard given general <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BL%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{L}" class="latex">. Many other problems that express the evaluation of quantum circuits are likewise <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7B%5C%23P%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{\#P}}" class="latex">-hard, including some of the special cases shown <a href="https://arxiv.org/pdf/1005.1407.pdf">here</a>. This raises a natural meta-question:</p>
<blockquote><p><b> </b> <em> Is it possible to simulate <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BBQP%7D%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{\mathsf{BQP}}" class="latex"> via a natural computational problem that is not (known to be) <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7B%5C%23P%7D%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{\mathsf{\#P}}" class="latex">-hard on its full natural domain of instances? </em>
</p></blockquote>
<p></p>
<p>
There is a case to be made for a categorical <em>no</em> answer. The <em>dichotomy</em> phenomenon is that whole classes <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BC%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{C}" class="latex"> of natural functions <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bf%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{f}" class="latex"> in <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7B%5C%23P%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{\#P}}" class="latex"> have the property that every <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bf+%5Cin+C%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{f \in C}" class="latex"> is either in <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BP%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{P}}" class="latex"> or is <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7B%5C%23P%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{\#P}}" class="latex">-complete. Now <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BBQP%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{BQP}}" class="latex"> is believed to be intermediate between <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BP%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{P}}" class="latex"> and <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BPP%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{PP}}" class="latex">, the latter being the language-class peer of <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7B%5C%23P%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{\#P}}" class="latex">. 
But <em>dichotomy</em> seems to leave no room for a natural capture of <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BBQP%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{BQP}}" class="latex"> that stays in-between. This is besides the observation from structural complexity theory that insofar as <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BBQP%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{BQP}}" class="latex"> is a “promise” class, it is unlikely to have a complete <em>decision problem</em>, nor, <em>ipso facto</em>, to be captured by simple function computations.</p>
<p>
</p>
<p></p>
<h2> An Answer </h2>
<p></p>
<p>
Dominic’s 2005 <a href="https://arxiv.org/abs/0908.2122">paper</a> with Freedman, Laci Lovász, and Dominic’s student Magnus Bordewich gave—among several results—a task that exactly captures <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BBQP%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{BQP}}" class="latex">. This required a clever new condition of <em>approximation</em> for numerical functions <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bf%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{f}" class="latex">. The most familiar ones require computing (deterministically or with high probability) a value <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7By%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{y}" class="latex"> such that </p>
<p align="center"><img decoding="async" src="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle++f%28x%29+-+%5Cepsilon+f%28x%29+%5Cleq+y+%5Cleq+f%28x%29+%2B+%5Cepsilon+f%28x%29%2C+&bg=ffffff&fg=000000&s=0&c=20201002" alt="\displaystyle f(x) - \epsilon f(x) \leq y \leq f(x) + \epsilon f(x), " class="latex"></p>
<p>where for every prescribed <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cepsilon+%3E+0%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\epsilon > 0}" class="latex">, the running time is polynomial in <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%7Cx%7C%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{|x|}" class="latex">. If the time is also polynomial in <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cfrac%7B1%7D%7B%5Cepsilon%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\frac{1}{\epsilon}}" class="latex">, one speaks of a <em>fully</em> approximating scheme (<a href="https://people.inf.ethz.ch/gmohsen/AA19/Notes/S5.pdf">FPRAS</a> in the random case). </p>
<p>
A rub with this is that the most familiar numerical approaches to simulating quantum circuits <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BC%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{C}" class="latex"> make the acceptance probability (or variously, the amplitude) have the form </p>
<p align="center"><img decoding="async" src="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle++%5CPr%5BC%28x%29%3D1%5D+%3D+%5Cfrac%7Bf_1%28x%29+-+f_2%28x%29%7D%7BR%7D%2C+&bg=ffffff&fg=000000&s=0&c=20201002" alt="\displaystyle \Pr[C(x)=1] = \frac{f_1(x) - f_2(x)}{R}, " class="latex"></p>
<p>where <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bf_1%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{f_1}" class="latex"> and <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bf_2%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{f_2}" class="latex"> are fully approximable but the difference <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bf_1%28x%29+-+f_2%28x%29%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{f_1(x) - f_2(x)}" class="latex"> is small, typically of order <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%28f_1%28x%29+%2B+f_2%28x%29%29%5E%7B1%2F2%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{(f_1(x) + f_2(x))^{1/2}}" class="latex">. This “multiplicative” approximation property does not carry through to the difference.</p>
<p>
Their <em>additive</em> approximation scheme takes an auxiliary function <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bu%28x%29%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{u(x)}" class="latex"> and seeks to compute <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7By%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{y}" class="latex"> such that <a name="additive"></a></p>
<p align="center"><img decoding="async" src="https://s0.wp.com/latex.php?latex=%5Cdisplaystyle++f%28x%29+-+%5Cepsilon+u%28x%29+%5Cleq+y+%5Cleq+f%28x%29+%2B+%5Cepsilon+u%28x%29.+%5C+%5C+%5C+%5C+%5C+%281%29&bg=ffffff&fg=000000&s=0&c=20201002" alt="\displaystyle f(x) - \epsilon u(x) \leq y \leq f(x) + \epsilon u(x). \ \ \ \ \ (1)" class="latex"></p>
<p> Now if <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bf_1%28x%29%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{f_1(x)}" class="latex"> and <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bf_2%28x%29%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{f_2(x)}" class="latex"> are approximable with “normalizer” <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bu%28x%29%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{u(x)}" class="latex"> in this sense, then so is <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bf_1%28x%29+-+f_2%28x%29%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{f_1(x) - f_2(x)}" class="latex"> with normalizer <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B2u%28x%29%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{2u(x)}" class="latex">. This works even if the difference is relatively tiny as above, provided <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bu%28x%29%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{u(x)}" class="latex"> is chosen appropriately. It still took more cleverness and an appeal to knot theory to make the idea work for <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BBQP%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{BQP}}" class="latex">:</p>
<blockquote><p><b>Theorem 2 (paraphrase)</b> <em> Let <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BA%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{A}" class="latex"> be an oracle function that takes as input <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cepsilon%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{\epsilon}" class="latex"> and a knot link <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BL%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{L}" class="latex">, where <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BL%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{L}" class="latex"> is given as the <em>plat closure</em> of a <a href="https://encyclopediaofmath.org/wiki/Braid_theory">braid</a> of string size <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bm%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{m}" class="latex">, and returns an additive approximation of <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BV_L%28e%5E%7B2%5Cpi+i%2F5%7D%29%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{V_L(e^{2\pi i/5})}" class="latex"> within <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cepsilon%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{\epsilon}" class="latex"> times the normalizer <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%282%5Ccos%5Cfrac%7B%5Cpi%7D%7B5%7D%29%5E%7Bm%2F2%7D%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{(2\cos\frac{\pi}{5})^{m/2}}" class="latex">. Then <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BBQP+%3D+P%5EA%7D%7D&bg=e8e8e8&fg=000000&s=0&c=20201002" alt="{\mathsf{BQP = P^A}}" class="latex">. </em>
</p></blockquote>
<p></p>
<p>
Here <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7BL%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{L}" class="latex"> is the “<img decoding="async" src="https://s0.wp.com/latex.php?latex=%7Bx%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{x}" class="latex">” in the definition of additive approximation. In words, the task of generating such approximations of the Jones polynomial at a fifth root of unity is equivalent in power to <img decoding="async" src="https://s0.wp.com/latex.php?latex=%7B%5Cmathsf%7BBQP%7D%7D&bg=ffffff&fg=000000&s=0&c=20201002" alt="{\mathsf{BQP}}" class="latex">, allowing only deterministic polynomial time computation otherwise. As the <a href="http://matroidunion.org/?p=5304">tribute</a> co-written by Graham Farr puts it,</p>
<blockquote><p><b> </b> <em> Dominic collaborated with Bordewich, Freedman and Lovász on an important paper (2005) showing that an additive approximation (which is weaker than an FPRAS) to a certain Tutte polynomial evaluation (related to the Jones polynomial) is sufficient to <b>capture</b> the power of quantum computation. (emphasis added) </em>
</p></blockquote>
<p>
Dominic had much more engagement with quantum computing, including his co-supervision with Artur Ekert of Michele Mosca, who went on to the University of Waterloo and was one of several who welcomed Dominic there for an <a href="https://uwaterloo.ca/combinatorics-and-optimization/news/dominic-welsh-awarded-honorary-dmath-degree-and-addressed">honorary doctorate</a> in 2006.</p>
<p>
</p>
<p></p>
<h2> Open Problems </h2>
<p></p>
<p>
The <em>Matroid Union</em> tribute mentions numerous conjectures arising from Dominic’s work: proved, disproved, and still open. One of the proved ones is featured in the <a href="https://www.mathunion.org/fileadmin/IMU/Prizes/Fields/2022/IMU_Fields22_Huh_citation.pdf">citation</a> for the 2022 Fields Medal awarded to June Huh of Princeton University. I’ve tried to lay out why the open ones have important ramifications; the tribute gives links by which to pursue them.</p>
<p>
Our condolences to Bridget, their family, and all who were graced to know Dominic over the years. A <a href="https://www.merton.ox.ac.uk/event/memorial-service-professor-dominic-welsh">memorial service</a> and tea reception afterward will be held at Merton College on June 1, 3:00pm UK time (with livestream).</p>
<p></p>
<p><br>
[added Mathematics Genealogy link to Dominic’s students at top; fixed that Farr’s first section is on discrete probability not matroid theory]</p>
<p class="authors">By KWRegan</p>
</body></html>
2024-03-01 21:13:22 UTC | Richard Lipton | On Efficient Computation of DiRe Committees | http://arxiv.org/abs/2402.19365v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Kunal+Relia">Kunal Relia</a></p>Consider a committee election consisting of (i) a set of candidates who are
divided into arbitrary groups each of size \emph{at most} two and a diversity
constraint that stipulates the selection of \emph{at least} one candidate from
each group and (ii) a set of voters who are divided into arbitrary populations
each approving \emph{at most} two candidates and a representation constraint
that stipulates the selection of \emph{at least} one candidate from each
population who has a non-null set of approved candidates.
The DiRe (Diverse + Representative) committee feasibility problem (a.k.a. the
minimum vertex cover problem on unweighted undirected graphs) concerns the
determination of the smallest size committee that satisfies the given
constraints. Here, for this problem, we discover an unconditional deterministic
polynomial-time algorithm that is an amalgamation of maximum matching,
breadth-first search, maximal matching, and local minimization.</body></html>
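For context on the vertex-cover formulation the abstract invokes: matchings and vertex covers are classically linked in that the endpoints of any maximal matching form a 2-approximate minimum vertex cover. A minimal sketch of that textbook bound (not the paper's algorithm):

```python
def vertex_cover_2approx(edges):
    """Take both endpoints of a greedily built maximal matching.
    Every edge touches the matching (else it could be added), so this
    is a vertex cover; it is at most twice the optimum, since any
    cover needs at least one endpoint per matching edge."""
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))   # edge joins the matching
            cover.update((u, v))     # both endpoints enter the cover
    return cover

# Toy graph: path 1-2-3-4 plus the chord 2-4.
edges = [(1, 2), (2, 3), (3, 4), (2, 4)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)
```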
2024-03-01 01:00:00 UTC | arXiv: Computational Complexity | The Power of Unentangled Quantum Proofs with Non-negative Amplitudes | http://arxiv.org/abs/2402.18790v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Fernando+Granha+Jeronimo">Fernando Granha Jeronimo</a>, <a href="https://dblp.uni-trier.de/search?q=Pei+Wu">Pei Wu</a></p>Quantum entanglement is a fundamental property of quantum mechanics and plays
a crucial role in quantum computation and information. We study entanglement
via the lens of computational complexity by considering quantum generalizations
of the class NP with multiple unentangled quantum proofs, the so-called QMA(2)
and its variants. The complexity of QMA(2) is a longstanding open problem, and
only the trivial bounds QMA $\subseteq$ QMA(2) $\subseteq$ NEXP are known.
In this work, we study the power of unentangled quantum proofs with
non-negative amplitudes, a class which we denote $\text{QMA}^+(2)$. In this
setting, we are able to design proof verification protocols for problems both
using logarithmic size quantum proofs and having a constant probability gap in
distinguishing yes from no instances. In particular, we design global protocols
for small set expansion, unique games, and PCP verification. As a consequence,
we obtain NP $\subseteq \text{QMA}^+_{\log}(2)$ with a constant gap. By virtue
of the new constant gap, we are able to ``scale up'' this result to
$\text{QMA}^+(2)$, obtaining the full characterization $\text{QMA}^+(2)$=NEXP
by establishing stronger explicitness properties of the PCP for NEXP.
One key novelty of these protocols is the manipulation of quantum proofs in a
global and coherent way yielding constant gaps. Previous protocols (only
available for general amplitudes) are either local having vanishingly small
gaps or treat the quantum proofs as classical probability distributions
requiring polynomially many proofs thereby not implying non-trivial bounds on
QMA(2).
Finally, we show that QMA(2) is equal to $\text{QMA}^+(2)$ provided the gap
of the latter is a sufficiently large constant. In particular, if
$\text{QMA}^+(2)$ admits gap amplification, then QMA(2)=NEXP.</body></html>
2024-03-01 01:00:00 UTC | arXiv: Computational Complexity | Spectral Meets Spatial: Harmonising 3D Shape Matching and Interpolation | http://arxiv.org/abs/2402.18920v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Dongliang+Cao">Dongliang Cao</a>, <a href="https://dblp.uni-trier.de/search?q=Marvin+Eisenberger">Marvin Eisenberger</a>, <a href="https://dblp.uni-trier.de/search?q=Nafie+El+Amrani">Nafie El Amrani</a>, <a href="https://dblp.uni-trier.de/search?q=Daniel+Cremers">Daniel Cremers</a>, <a href="https://dblp.uni-trier.de/search?q=Florian+Bernard">Florian Bernard</a></p>Although 3D shape matching and interpolation are highly interrelated, they
are often studied separately and applied sequentially to relate different 3D
shapes, thus resulting in sub-optimal performance. In this work we present a
unified framework to predict both point-wise correspondences and shape
interpolation between 3D shapes. To this end, we combine the deep functional
map framework with classical surface deformation models to map shapes in both
spectral and spatial domains. On the one hand, by incorporating spatial maps,
our method obtains more accurate and smooth point-wise correspondences compared
to previous functional map methods for shape matching. On the other hand, by
introducing spectral maps, our method gets rid of commonly used but
computationally expensive geodesic distance constraints that are only valid for
near-isometric shape deformations. Furthermore, we propose a novel test-time
adaptation scheme to capture both pose-dominant and shape-dominant
deformations. Using different challenging datasets, we demonstrate that our
method outperforms previous state-of-the-art methods for both shape matching
and interpolation, even compared to supervised approaches.</body></html>
2024-03-01 01:00:00 UTC | arXiv: Computational Geometry | Statistical Estimation in the Spiked Tensor Model via the Quantum Approximate Optimization Algorithm | http://arxiv.org/abs/2402.19456v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Leo+Zhou">Leo Zhou</a>, <a href="https://dblp.uni-trier.de/search?q=Joao+Basso">Joao Basso</a>, <a href="https://dblp.uni-trier.de/search?q=Song+Mei">Song Mei</a></p>The quantum approximate optimization algorithm (QAOA) is a general-purpose
algorithm for combinatorial optimization. In this paper, we analyze the
performance of the QAOA on a statistical estimation problem, namely, the spiked
tensor model, which exhibits a statistical-computational gap classically. We
prove that the weak recovery threshold of $1$-step QAOA matches that of
$1$-step tensor power iteration. Additional heuristic calculations suggest that
the weak recovery threshold of $p$-step QAOA matches that of $p$-step tensor
power iteration when $p$ is a fixed constant. This further implies that
multi-step QAOA with tensor unfolding could achieve, but not surpass, the
classical computation threshold $\Theta(n^{(q-2)/4})$ for spiked $q$-tensors.
Meanwhile, we characterize the asymptotic overlap distribution for $p$-step
QAOA, finding an intriguing sine-Gaussian law verified through simulations. For
some $p$ and $q$, the QAOA attains an overlap that is larger by a constant
factor than the tensor power iteration overlap. Of independent interest, our
proof techniques employ the Fourier transform to handle difficult combinatorial
sums, a novel approach differing from prior QAOA analyses on spin-glass models
without planted structure.</body></html>
2024-03-01 01:00:00 UTC | arXiv: Data Structures and Algorithms | Higher-Order Networks Representation and Learning: A Survey | http://arxiv.org/abs/2402.19414v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Hao+Tian">Hao Tian</a>, <a href="https://dblp.uni-trier.de/search?q=Reza+Zafarani">Reza Zafarani</a></p>Network data has become widespread, larger, and more complex over the years.
Traditional network data is dyadic, capturing the relations among pairs of
entities. With the need to model interactions among more than two entities,
significant research has focused on higher-order networks and ways to
represent, analyze, and learn from them. There are two main directions to
studying higher-order networks. One direction has focused on capturing
higher-order patterns in traditional (dyadic) graphs by changing the basic unit
of study from nodes to small frequently observed subgraphs, called motifs. As
most existing network data comes in the form of pairwise dyadic relationships,
studying higher-order structures within such graphs may uncover new insights.
The second direction aims to directly model higher-order interactions using new
and more complex representations such as simplicial complexes or hypergraphs.
Some of these models have long been proposed, but improvements in computational
power and the advent of new computational techniques have increased their
popularity. Our goal in this paper is to provide a succinct yet comprehensive
summary of the advanced higher-order network analysis techniques. We provide a
systematic review of its foundations and algorithms, along with use cases and
applications of higher-order networks in various scientific domains.</body></html>
2024-03-01 01:00:00 UTC | arXiv: Data Structures and Algorithms | Total Completion Time Scheduling Under Scenarios | http://arxiv.org/abs/2402.19259v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Thomas+Bosman">Thomas Bosman</a>, <a href="https://dblp.uni-trier.de/search?q=Martijn+van+Ee">Martijn van Ee</a>, <a href="https://dblp.uni-trier.de/search?q=Ekin+Ergen">Ekin Ergen</a>, <a href="https://dblp.uni-trier.de/search?q=Csanad+Imreh">Csanad Imreh</a>, <a href="https://dblp.uni-trier.de/search?q=Alberto+Marchetti-Spaccamela">Alberto Marchetti-Spaccamela</a>, <a href="https://dblp.uni-trier.de/search?q=Martin+Skutella">Martin Skutella</a>, <a href="https://dblp.uni-trier.de/search?q=Leen+Stougie">Leen Stougie</a></p>Scheduling jobs with given processing times on identical parallel machines so
as to minimize their total completion time is one of the most basic scheduling
problems. We study interesting generalizations of this classical problem
involving scenarios. In our model, a scenario is defined as a subset of a
predefined and fully specified set of jobs. The aim is to find an assignment of
the whole set of jobs to identical parallel machines such that the schedule,
obtained for the given scenarios by simply skipping the jobs not in the
scenario, optimizes a function of the total completion times over all
scenarios.
While the underlying scheduling problem without scenarios can be solved
efficiently by a simple greedy procedure (SPT rule), scenarios, in general,
make the problem NP-hard. We paint an almost complete picture of the evolving
complexity landscape, drawing the line between easy and hard. One of our main
algorithmic contributions relies on a deep structural result on the maximum
imbalance of an optimal schedule, based on a subtle connection to Hilbert bases
of a related convex cone.</body></html>
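The scenario-free baseline the abstract mentions, the SPT rule, is simple to state: process jobs in nondecreasing order of processing time, always on the machine that frees up first. A minimal sketch of that classical rule (my own illustration, not the paper's scenario algorithm):

```python
import heapq

def spt_total_completion_time(processing_times, m):
    """Greedy SPT rule: schedule jobs shortest-first, each on the
    machine with the earliest current finish time; returns the total
    completion time, which SPT minimizes on identical machines."""
    loads = [0] * m                      # current finish time per machine
    total = 0
    for p in sorted(processing_times):   # shortest processing time first
        finish = heapq.heappop(loads) + p
        total += finish                  # this job completes at `finish`
        heapq.heappush(loads, finish)
    return total

# Jobs 1, 2, 3 on two machines: completions 1 and 2 on empty machines,
# then 1 + 3 = 4, for a total of 7.
assert spt_total_completion_time([3, 1, 2], 2) == 7
```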
2024-03-01 01:00:00 UTC | arXiv: Data Structures and Algorithms | Edit and Alphabet-Ordering Sensitivity of Lex-parse | http://arxiv.org/abs/2402.19223v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Yuto+Nakashima">Yuto Nakashima</a>, <a href="https://dblp.uni-trier.de/search?q=Dominik+K%C3%B6ppl">Dominik Köppl</a>, <a href="https://dblp.uni-trier.de/search?q=Mitsuru+Funakoshi">Mitsuru Funakoshi</a>, <a href="https://dblp.uni-trier.de/search?q=Shunsuke+Inenaga">Shunsuke Inenaga</a>, <a href="https://dblp.uni-trier.de/search?q=Hideo+Bannai">Hideo Bannai</a></p>We investigate the compression sensitivity [Akagi et al., 2023] of lex-parse
[Navarro et al., 2021] for two operations: (1) single character edit and (2)
modification of the alphabet ordering, and give tight upper and lower bounds
for both operations. For both lower bounds, we use the family of Fibonacci
words. For the bounds on edit operations, our analysis makes heavy use of
properties of the Lyndon factorization of Fibonacci words to characterize the
structure of lex-parse.</body></html>
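For reference, the Lyndon factorization that the analysis leans on can be computed in linear time with Duval's algorithm; a minimal sketch of the standard algorithm (not code from the paper):

```python
def lyndon_factorization(s):
    """Duval's algorithm: split s into its unique factorization
    w1 w2 ... wk where each wi is a Lyndon word and w1 >= w2 >= ... >= wk."""
    factors, i = [], 0
    while i < len(s):
        j, k = i + 1, i
        while j < len(s) and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1   # restart or advance the period
            j += 1
        while i <= k:                         # emit complete Lyndon blocks
            factors.append(s[i:i + j - k])
            i += j - k
    return factors

assert lyndon_factorization("banana") == ["b", "an", "an", "a"]
assert lyndon_factorization("aab") == ["aab"]
```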
2024-03-01 01:00:00 UTC | arXiv: Data Structures and Algorithms | Computing Longest Common Subsequence under Cartesian-Tree Matching Model | http://arxiv.org/abs/2402.19146v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Taketo+Tsujimoto">Taketo Tsujimoto</a>, <a href="https://dblp.uni-trier.de/search?q=Koki+Shibata">Koki Shibata</a>, <a href="https://dblp.uni-trier.de/search?q=Takuya+Mieno">Takuya Mieno</a>, <a href="https://dblp.uni-trier.de/search?q=Yuto+Nakashima">Yuto Nakashima</a>, <a href="https://dblp.uni-trier.de/search?q=Shunsuke+Inenaga">Shunsuke Inenaga</a></p>Two strings of the same length are said to Cartesian-tree match (CT-match) if
their Cartesian-trees are isomorphic [Park et al., TCS 2020]. Cartesian-tree
matching is a natural model that allows for capturing similarities of numerical
sequences. Oizumi et al. [CPM 2022] showed that subsequence pattern matching
under CT-matching model can be solved in polynomial time. This current article
follows and extends this line of research: We present the first polynomial-time
algorithm that finds the longest common subsequence under CT-matching of two
given strings $S$ and $T$ of length $n$, in $O(n^6)$ time and $O(n^4)$ space
for general ordered alphabets. We then show that the problem has a faster
solution in the binary case, by presenting an $O(n^2 / \log n)$-time and space
algorithm.</body></html>
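Operationally, two sequences CT-match when their Cartesian trees (root at the minimum, recursing on the left and right parts) have the same shape. A minimal sketch assuming distinct values and taking the leftmost minimum as root; this is a quadratic illustration of the definition only, and the paper's tie-breaking conventions and algorithms differ:

```python
def ct_shape(seq):
    """Shape of the min-rooted Cartesian tree of seq; two sequences
    CT-match iff these shapes are equal (distinct values assumed)."""
    if not seq:
        return None
    i = seq.index(min(seq))                   # the minimum is the root
    return (ct_shape(seq[:i]), ct_shape(seq[i + 1:]))

# (1, 3, 2) and (10, 30, 20) rise then fall the same way, so they match;
# the monotone (1, 2, 3) does not match them.
assert ct_shape([1, 3, 2]) == ct_shape([10, 30, 20])
assert ct_shape([1, 2, 3]) != ct_shape([1, 3, 2])
```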
2024-03-01 01:00:00 UTC | arXiv: Data Structures and Algorithms | Rahmani Sort: A Novel Variant of Insertion Sort Algorithm with O(nlogn) Complexity | http://arxiv.org/abs/2402.19107v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Mohammad+Khalid+Imam+Rahmani">Mohammad Khalid Imam Rahmani</a></p>Various decision support systems are available that implement Data Mining and
Data Warehousing techniques for diving into the sea of data for getting useful
patterns of knowledge (pearls). Classification, regression, clustering, and
many other algorithms are used to enhance the precision and accuracy of the
decision process. So, there is scope for improving the response time of the
decision process, especially in mission-critical operations. If data are
ordered with suitable and efficient sorting operation, the response time of the
decision process can be minimized. Insertion sort is well suited to such
applications because of its simple, straightforward logic and its dynamic
nature, which lends itself to list implementations. But it is slower than merge
sort and quick sort, for two main reasons: first, a sequential search is used
to find the position of the next key element in the sorted left subarray; and
second, elements must be shifted one position to the right to accommodate the
newly inserted element. Therefore, I propose a new algorithm that uses a binary
search mechanism to find the sorted position of the next key item in the
previously sorted left subarray much more quickly than the conventional
insertion sort.
The performance of the new algorithm, measured in terms of actual running
time, has been compared with that of other conventional sorting algorithms
in addition to insertion sort. The results obtained on various sample data show
that the new algorithm is better in performance than the conventional insertion
sort and merge sort algorithms.</body></html>
2024-03-01 01:00:00 UTCarXiv: Data Structures and AlgorithmsEfficient Processing of Subsequent Densest Subgraph Queryhttp://arxiv.org/abs/2402.18883v1
http://arxiv.org/abs/2402.18883v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Chia-Yang+Hung">Chia-Yang Hung</a>, <a href="https://dblp.uni-trier.de/search?q=Chih-Ya+Shen">Chih-Ya Shen</a></p>Dense subgraph extraction is a fundamental problem in graph analysis and data
mining, aimed at identifying cohesive and densely connected substructures
within a given graph. It plays a crucial role in various domains, including
social network analysis, biological network analysis, recommendation systems,
and community detection. However, extracting a subgraph with the highest node
similarity remains largely unexplored. To address this problem, we study the
Member Selection Problem and extend it with a dynamic constraint variant. By
incorporating dynamic constraints, our algorithm can adapt to changing
conditions or requirements, allowing for more flexible and personalized
subgraph extraction. This approach enables the algorithm to provide tailored
solutions that meet specific needs, even in scenarios where constraints may
vary over time. We also provide a theoretical analysis showing that our
algorithm achieves a 1/3-approximation. Finally, experiments show that our
algorithm is effective and efficient in tackling the member selection problem
with dynamic constraints.</body></html>
2024-03-01 01:00:00 UTCarXiv: Data Structures and AlgorithmsLinkage for leap dayhttps://11011110.github.io/blog/2024/02/29/linkage-leap-day
https://11011110.github.io/blog/2024/02/29/linkage-leap-day.html
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<ul>
<li>
<p><a href="https://www.quantamagazine.org/the-mysterious-math-of-billiards-tables-20240215/">The mysterious math of billiards tables</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@divbyzero/111936888315264942">\(\mathbb{M}\)</a>),</span> Dave Richeson in <em>Quanta</em>.</p>
</li>
<li>
<p><a href="https://mathstodon.xyz/@robinhouston/111947749963212324">Ph’nglui mglw’nafh Cthulhu R’lyeh wgah’nagl fhtagn!</a> A figure caption from Erin Wolf Chambers’ doctoral dissertation gets cited in WikiQuote. The figure it describes could be of two Great Old Ones osculating.</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2402.10343">Non-adaptive Bellman-Ford: Yen’s improvement is optimal</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/111956711529904136">\(\mathbb{M}\)</a>),</span> Jialu Hu and László Kozma. This new preprint improves <a href="https://arxiv.org/abs/2305.09230">my paper from last year</a> showing that a naive version of the Bellman–Ford shortest path algorithm that relaxes a predetermined sequence of edges without adapting to the results of previous relaxation steps must take cubic time. Hu and Kozma find the tight constant factor in the cubic bound. This is strong enough to show that, for this problem, randomization truly helps: there is a randomized algorithm (by Bannister & me) that uses fewer relaxation steps than Hu and Kozma’s new deterministic lower bound.</p>
</li>
<li>
<p><a href="https://scholarlykitchen.sspnet.org/2024/02/14/guest-post-the-perplexing-puzzle-of-the-top-2-scientists-list/">Anomalies and inconsistencies in Ioannidis’ “most influential scientists” list</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@monsoon0/111948908376506042">\(\mathbb{M}\)</a>).</span> Including authors with publication dates more than a century after their deaths (or worse, more than a century before their births), non-academic journalists, institutional authors, and prolific self-citers.</p>
</li>
<li>
<p><a href="https://perso.liris.cnrs.fr/lfeuilloley/autre/SIGACT-column.pdf">The Environmental Cost of Our Conferences: The CO2 Emissions
due to Travel at PODC and DISC</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/111964919146902988">\(\mathbb{M}\)</a>,</span> <a href="https://discrete-notes.github.io///environmental-cost">via</a>), Laurent Feuilloley and Tijn de Vos, <a href="https://dl.acm.org/doi/10.1145/3639528.3639537">in <em>SIGACT News</em> </a>. Overall recommendations are at the end of section 1: set concrete goals for carbon footprint reduction, start now in varying conference formats to do this rather than continuing to put it off, use a data-driven methodology, and set up a long-term task force to oversee the process.</p>
</li>
<li>
<p><a href="https://www.cgt-journal.org/index.php/cgt"><em>Computing in Geometry and Topology</em></a>, the open access computational geometry journal that I co-edit, <a href="https://dblp.org/db/journals/cgt/index.html">has now been indexed by DBLP</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/111970721313407506">\(\mathbb{M}\)</a>).</span></p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2307.15996">Locked Polyomino Tilings</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@two_star/111967439902834166">\(\mathbb{M}\)</a>),</span> new preprint by Jamie Tucker-Foltz on a cute recreational mathematics problem inspired by serious research on the mathematics of gerrymandering. An <span style="white-space:nowrap">\(n\)-omino</span> tiling is locked if there is no way to merge two adjacent <span style="white-space:nowrap">\(n\)-ominos</span> and then separate them in a different way into two <span style="white-space:nowrap">\(n\)-ominos</span>. See the linked discussion for the smallest locked tetromino and pentomino tilings, of \(10\times 10\) and \(20\times 20\) squares respectively.</p>
</li>
<li>
<p><a href="https://mathstodon.xyz/@noneuclideandreamer/111907379202917221">Hexagonal tiling honeycomb</a>. Flythrough of a foam of hexagonally-tiled horospherical bubbles in hyperbolic space.</p>
</li>
<li>
<p><a href="https://tonsky.me/blog/js-bloat">JavaScript bloat in 2024</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@whorfin@mastodon.social/111989213232408672">\(\mathbb{M}\)</a>).</span> I’d love to not use any JavaScript, but that means no mathematical formulas.</p>
</li>
<li>
<p>Another newly promoted Wikipedia Good Article: <a href="https://en.wikipedia.org/wiki/Sch%C3%B6nhardt_polyhedron">Schönhardt polyhedron</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/111994101172189544">\(\mathbb{M}\)</a>),</span> a six-vertex concave twisted prism whose diagonals are all outside it, preventing it from being triangulated. I learned while working on this that, when given the correct twist angle (\(30^\circ\)), its edges form a tensegrity structure that was exhibited in 1921 by Latvian-Soviet artist Karlis Johansons, seven years before Schönhardt’s mathematics publication on it.</p>
</li>
<li>
<p><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8238192/">The h-index is no longer an effective correlate of scientific reputation</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/111999055505376529">\(\mathbb{M}\)</a>,</span> <a href="https://en.wikipedia.org/wiki/Wikipedia_talk:Notability_(academics)">via</a>), Koltun & Hafner, <em>PLoS One</em>, 2021. According to the authors, in physics, the h-index no longer correlates to other measures of success such as awards from the scientific community, largely because of the huge collaborations that have come to dominate the field and that cause all of their members to have huge h-indexes.</p>
</li>
<li>
<p><a href="https://www.citationneeded.news/become-a-wikipedian-in-30-minutes/">Become a Wikipedia editor in 30 minutes</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@dangillmor@mastodon.social/111993446565117638">\(\mathbb{M}\)</a>).</span> Molly White gives video advice to help Wiki-newbies avoid getting tangled in the Wikipedia bureaucracy and start working to “prevent Wikipedia from crumbling under the weight of AI garbage spewing, and disinformation specialists’ gleeful use of the garbage”.</p>
</li>
<li>
<p><a href="https://mathstodon.xyz/@robinhouston/112008445019668577">An architectural panel in a Chinese restaurant that looks like a Gilbert tessellation</a>, with some speculation on the design principles that could have produced it.</p>
</li>
<li>
<p><a href="https://onlinebooks.library.upenn.edu/webbin/book/browse?type=lcsubc&key=Mathematics%2C%20Physics&c=x">A listing of freely-available online books and journal volumes</a> <span style="white-space:nowrap">(<a href="https://mathstodon.xyz/@11011110/112017929862811536">\(\mathbb{M}\)</a>),</span> possibly helpful to counter <a href="https://mathstodon.xyz/@highergeometer/112000500087734445">academic publishers who demand high access fees for bad scans of old public-domain journal articles</a> and attempt to justify them by the preservation effort that they are clearly not doing, in cases for which the usual Google Scholar searches fail to turn up alternative copies.</p>
</li>
</ul>
<p class="authors">By David Eppstein</p>
</body></html>
2024-02-29 18:44:00 UTCDavid EppsteinThe Soothing Warmth of Vacuum Tubeshttps://www.argmin.net/p/the-soothing-warmth-of-vacuum-tubes
https://www.argmin.net/p/the-soothing-warmth-of-vacuum-tubes
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d50d00d-b561-48f0-8b0c-88c02b73456e_1100x220.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset">
<picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d50d00d-b561-48f0-8b0c-88c02b73456e_1100x220.jpeg 424w, https://substackcdn.com/image/fetch/w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d50d00d-b561-48f0-8b0c-88c02b73456e_1100x220.jpeg 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d50d00d-b561-48f0-8b0c-88c02b73456e_1100x220.jpeg 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d50d00d-b561-48f0-8b0c-88c02b73456e_1100x220.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d50d00d-b561-48f0-8b0c-88c02b73456e_1100x220.jpeg" width="1100" height="220" data-attrs='{"src":"https://substack-post-media.s3.amazonaws.com/public/images/4d50d00d-b561-48f0-8b0c-88c02b73456e_1100x220.jpeg","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":220,"width":1100,"resizeWidth":null,"bytes":248902,"alt":null,"title":null,"type":"image/jpeg","href":null,"belowTheFold":false,"topImage":true,"internalRedirect":null}' class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d50d00d-b561-48f0-8b0c-88c02b73456e_1100x220.jpeg 424w, 
https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d50d00d-b561-48f0-8b0c-88c02b73456e_1100x220.jpeg 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d50d00d-b561-48f0-8b0c-88c02b73456e_1100x220.jpeg 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d50d00d-b561-48f0-8b0c-88c02b73456e_1100x220.jpeg 1456w" sizes="100vw" fetchpriority="high"></source></picture><div></div>
</div></a></figure></div>
<p>Coming back to this scatter plot of decision systems today, let’s jump all the way to the top-most corner:</p>
<div class="captioned-image-container"><figure><a class="image-link is-viewable-img image2" target="_blank" href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57842f93-7946-4972-8bb4-6f8992e346a0_1302x1346.png" data-component-name="Image2ToDOM"><div class="image2-inset">
<picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57842f93-7946-4972-8bb4-6f8992e346a0_1302x1346.png 424w, https://substackcdn.com/image/fetch/w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57842f93-7946-4972-8bb4-6f8992e346a0_1302x1346.png 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57842f93-7946-4972-8bb4-6f8992e346a0_1302x1346.png 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57842f93-7946-4972-8bb4-6f8992e346a0_1302x1346.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57842f93-7946-4972-8bb4-6f8992e346a0_1302x1346.png" width="510" height="527.2350230414746" data-attrs='{"src":"https://substack-post-media.s3.amazonaws.com/public/images/57842f93-7946-4972-8bb4-6f8992e346a0_1302x1346.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":1346,"width":1302,"resizeWidth":510,"bytes":212174,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":false,"topImage":false,"internalRedirect":null}' class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57842f93-7946-4972-8bb4-6f8992e346a0_1302x1346.png 424w, 
https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57842f93-7946-4972-8bb4-6f8992e346a0_1302x1346.png 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57842f93-7946-4972-8bb4-6f8992e346a0_1302x1346.png 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57842f93-7946-4972-8bb4-6f8992e346a0_1302x1346.png 1456w" sizes="100vw"></source></picture><div class="image-link-expand"><svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewbox="0 0 24 24" fill="none" stroke="#FFFFFF" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 "><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div>
</div></a></figure></div>
<p>What happens when you can act instantaneously with maximal impact? This is the realm of old-school electrical engineering and the feedback amplifier. Though it’s a bit of a stretch to think of circuits like this as decision systems, they highlight some fundamental issues in a very elementary way. And the mathematics is just middle school algebra (I know this because I ran the post by my fifth grader, and he was into it).</p>
<p>I have two circuits. The first circuit is a high-gain amplifier. This thing just takes inputs and makes them super loud. But I don’t know how loud the amplifier really is. On any given day, it might change its amplification level by a factor of two or more. And I’ve noticed that the amplifier is sometimes better at amplifying treble than bass, but sometimes it’s the opposite. The amp can make signals a thousand times louder, but it’s unreliable. Should I throw it out?</p>
<p>Maybe not. I have a second component, an attenuator, that takes signals and makes them slightly less loud. I can use the attenuator to make my unruly amplifier well-behaved by connecting the two parts in a feedback loop:</p>
<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa48edc43-3771-4520-9e17-71c4d841abe1_1034x488.png" data-component-name="Image2ToDOM"><div class="image2-inset">
<picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa48edc43-3771-4520-9e17-71c4d841abe1_1034x488.png 424w, https://substackcdn.com/image/fetch/w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa48edc43-3771-4520-9e17-71c4d841abe1_1034x488.png 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa48edc43-3771-4520-9e17-71c4d841abe1_1034x488.png 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa48edc43-3771-4520-9e17-71c4d841abe1_1034x488.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa48edc43-3771-4520-9e17-71c4d841abe1_1034x488.png" width="416" height="196.33268858800773" data-attrs='{"src":"https://substack-post-media.s3.amazonaws.com/public/images/a48edc43-3771-4520-9e17-71c4d841abe1_1034x488.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":488,"width":1034,"resizeWidth":416,"bytes":39121,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":false,"topImage":false,"internalRedirect":null}' class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa48edc43-3771-4520-9e17-71c4d841abe1_1034x488.png 424w, 
https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa48edc43-3771-4520-9e17-71c4d841abe1_1034x488.png 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa48edc43-3771-4520-9e17-71c4d841abe1_1034x488.png 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa48edc43-3771-4520-9e17-71c4d841abe1_1034x488.png 1456w" sizes="100vw"></source></picture><div></div>
</div></a></figure></div>
<p>I can write what this circuit does as a few simple equations that link the inputs to the outputs. The amplifier is driven by the signal u. The voltage output of the amplifier is the amplifier gain times the input signal. This is the equation:</p>
<div class="latex-rendered" data-attrs='{"persistentExpression":"V_\\mathrm{out} = A \\cdot u","id":"NFDWSABOHM"}' data-component-name="LatexBlockToDOM"></div>
<p>“A” is the amplifier gain I don’t know particularly well. The attenuator works by taking its input and reducing it by a factor of B. The corresponding equation of this effect is</p>
<div class="latex-rendered" data-attrs='{"persistentExpression":"z = B \\cdot V_\\mathrm{out}","id":"KVBRYZYQTD"}' data-component-name="LatexBlockToDOM"></div>
<p>Finally, the feedback interconnection rule subtracts z from the input voltage to produce the input to the amplifier.</p>
<div class="latex-rendered" data-attrs='{"persistentExpression":"u = V_\\mathrm{in} - z","id":"FWIRPPBSRZ"}' data-component-name="LatexBlockToDOM"></div>
<p>We can combine these three equations into one, eliminating the variables u and z:</p>
<div class="latex-rendered" data-attrs='{"persistentExpression":"V_{\\mathrm{out}} = A (V_{\\mathrm{in}} - B \\cdot V_{\\mathrm{out}})","id":"WJNUXHJIDZ"}' data-component-name="LatexBlockToDOM"></div>
<p>Now I can solve for Vout and get the final expression:</p>
<div class="latex-rendered" data-attrs='{"persistentExpression":"V_\\mathrm{out} = \\frac{A}{1+AB} \\cdot V_{\\mathrm{in}}","id":"EXFCZMNPNQ"}' data-component-name="LatexBlockToDOM"></div>
<p>Let’s stare at this formula for a bit to see what it implies. First, as we make A larger, the gain from V<sub>in</sub> to V<sub>out</sub> approaches an asymptote. For super large values of A, the gain is basically just 1/B. But this feedback loop is very insensitive to the actual value of A. Suppose the attenuation factor B is ½. Then, when A is large enough, the gain should be around a factor of 2. We can plug in specific numbers and see a remarkable range of stable behavior. If A = 10,000, the gain from V<sub>in</sub> to V<sub>out</sub> is 1.9996. If A = 20,000, the gain is 1.9998. If A = 5,000, the gain is 1.9992. Over a huge range of open loop gains, from 400 to infinity, the gain of this system is within less than 1% of the ideal gain. Despite vast differences in open loop behavior, the closed loop behavior is predictably the same for all of these different amplifiers.</p>
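The specific gains quoted above follow directly from the closed-loop formula; a few lines of Python (a sketch added for illustration, not part of the original post) reproduce them:

```python
def closed_loop_gain(A, B):
    """Closed-loop gain V_out / V_in of the negative feedback loop:
    V_out = A * (V_in - B * V_out)  =>  gain = A / (1 + A * B)."""
    return A / (1 + A * B)

B = 0.5  # attenuation factor
for A in (5_000, 10_000, 20_000):
    # Prints gains of 1.9992, 1.9996, and 1.9998: all very close to 1/B = 2.
    print(f"A = {A:6d}: closed-loop gain = {closed_loop_gain(A, B):.4f}")
```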
<p>It’s even better than this. I told you earlier that the amplifier was temperamental and might change its amplification on any given day or might amplify different signals to different levels. Suppose that the amplifier amplifies one signal by 10,000 and one by 20,000. Both signals will be amplified by 2 when put through the feedback circuit. We mitigate all sorts of uncertainty in one component with the simple negative feedback law.</p>
<p>So what’s the catch? Though we are insensitive to the amplifier, we’re very sensitive to the system we’re using to control the amplifier. The output gain is very sensitive to the attenuation factor B. If the attenuator changes by a factor of 2 from ½ to ¼, then the closed loop gain changes from 2 to 4. The attenuation circuit can be precision manufactured if a precise amplification is necessary. But variability is often desirable, as you might have a knob that changes the attenuation value. This would give you a volume knob. Turn it up to 11. </p>
<p>There are less obvious, more dangerous issues. The feedback equations I wrote above assume the feedback occurs instantaneously. This is why I said we’re in the upper right corner of the scatter plot. But what if we can’t act immediately and there are delays between measuring the output of the amplifier and computing the feedback signal? Rather than a nice and simple formula for amplifier gain, we get stuck with an equation that looks like</p>
<div class="latex-rendered" data-attrs='{"persistentExpression":"V_{\\mathrm{out}}(t)= A \\cdot V_{\\mathrm{in}}(t) - A B \\cdot V_{\\mathrm{out}}(t-d)","id":"LELIXURHYT"}' data-component-name="LatexBlockToDOM"></div>
<p>This equation doesn’t have a clean analytic solution. Still, you can imagine what might happen: if Vin is changing at a rate comparable to the delay time, we might be effectively adding to the signal instead of subtracting. These errors can compound, and the signal might get amplified to arbitrarily high gains, ruining the amplifier. Control engineers are always wary of time delays and their pernicious effects, and the patches for dealing with them are quite complicated and non-elementary.</p>
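To get a feel for why delay is dangerous, here is a toy discrete-time iteration of the delayed equation (an illustrative caricature with made-up values A = 10, B = 0.5, and a one-step delay, not a faithful continuous-time simulation). Because the delayed feedback term has magnitude AB > 1, the output alternates in sign and grows without bound:

```python
def simulate_delayed_loop(A, B, delay, steps, v_in=1.0):
    """Iterate V_out(t) = A * v_in - A * B * V_out(t - delay),
    treating V_out as 0 before t = 0, and return the trajectory."""
    out = []
    for t in range(steps):
        delayed = out[t - delay] if t >= delay else 0.0
        out.append(A * v_in - A * B * delayed)
    return out

traj = simulate_delayed_loop(A=10, B=0.5, delay=1, steps=8)
# traj = [10.0, -40.0, 210.0, -1040.0, ...]: the sign flips each step and
# the magnitude grows by roughly a factor of A*B = 5, so the loop is unstable.
```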
<p>This simple amplifier example is instructive. When the actuator acts instantaneously, the analysis reduces to solving a simple equation. A barely quantified system can be made into a well-controlled, useful mechanism. But there is no free lunch. Seemingly innocuous delays can send the whole system off the rails.</p>
<p>Let me bring this back to decision making. The amplifier example is a bit contrived as we don’t think of electronic signal levels as “decisions.” But the ideas here generalize to a much wider set of general feedback rules. This general theory might be harder to explain in such clear terms, but let me attempt a summary in the next post.</p>
<p class="button-wrapper" data-attrs='{"url":"https://www.argmin.net/subscribe?","text":"Subscribe now","action":null,"class":null}' data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.argmin.net/subscribe?"><span>Subscribe now</span></a></p>
<p class="authors">By Ben Recht</p>
</body></html>
2024-02-29 16:01:11 UTCBen RechtTR24-040 | Randomness Extractors in $\mathrm{AC}^0$ and $\mathrm{NC}^1$: Optimal up to Constant Factors |
Ruiyang Wu,
Kuan Chenghttps://eccc.weizmann.ac.il/report/2024/040
https://eccc.weizmann.ac.il/report/2024/040
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body><p>We study extractors computable in uniform $\mathrm{AC}^0$ and uniform $\mathrm{NC}^1$.
For the $\mathrm{AC}^0$ setting, we give a construction such that for every $k \ge n/ \mathrm{poly} \log n$ and $\epsilon \ge 2^{-\mathrm{poly} \log n}$, it can extract $(1-\gamma)k$ bits of randomness from an $(n, k)$ source for an arbitrary constant $\gamma$, with seed length $O(\log \frac{n}{\epsilon})$. The output length and seed length are optimal up to constant factors, matching the parameters of the best polynomial-time constructions such as [GUV09]. The range of $k$ and $\epsilon$ almost meets the lower bound in [GVW15] and [CL18]. We also generalize the main lower bound of [GVW15] for extractors in $\mathrm{AC}^0$, showing that when $k < n/ \mathrm{poly} \log n$, even strong dispersers do not exist in $\mathrm{AC}^0$.
For the $\mathrm{NC}^1$ setting, we also give a construction with seed length $O(\log \frac{n}{\epsilon})$ and a small constant fraction of entropy loss in the output. The construction works for every $k \ge O(\log^2 n)$ and $\epsilon \ge 2^{-O(\sqrt{k})}$. To our knowledge, the previous best $\mathrm{NC}^1$ constructions are Trevisan's extractor [Tre01] and its improved version [RRV02], which have seed length $\mathrm{poly} \log \frac{n}{\epsilon}$.
Our main techniques include a new error-reduction process and a new output-stretch process, based on low-depth circuit implementations of mergers from [DKSS13], condensers from [KT22], and somewhere extractors from [Ta-98].</p></body></html>
2024-02-29 15:15:51 UTCECCC PapersTR24-039 | Optimal PSPACE-hardness of Approximating Set Cover Reconfiguration |
Shuichi Hirahara,
Naoto Ohsakahttps://eccc.weizmann.ac.il/report/2024/039
https://eccc.weizmann.ac.il/report/2024/039
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body><p>In the Minmax Set Cover Reconfiguration problem, given a set system $\mathcal{F}$ over a universe and its two covers $\mathcal{C}^\mathrm{start}$ and $\mathcal{C}^\mathrm{goal}$ of size $k$, we wish to transform $\mathcal{C}^\mathrm{start}$ into $\mathcal{C}^\mathrm{goal}$ by repeatedly adding or removing a single set of $\mathcal{F}$ while covering the universe in any intermediate state. The objective is to minimize the maximum size of any intermediate cover during transformation. We prove that Minmax Set Cover Reconfiguration and Minmax Dominating Set Reconfiguration are PSPACE-hard to approximate within a factor of $2-\frac{1}{\mathrm{polyloglog} N}$, where $N$ is the size of the universe and the number of vertices in a graph, respectively, improving upon Ohsaka (SODA 2024) and Karthik C. S. and Manurangsi (2023). This is the first result that exhibits a sharp threshold for the approximation factor of any reconfiguration problem, because both problems admit a $2$-factor approximation algorithm as per Ito, Demaine, Harvey, Papadimitriou, Sideri, Uehara, and Uno (Theor. Comput. Sci., 2011). Our proof is based on a reconfiguration analogue of the FGLSS reduction from Probabilistically Checkable Reconfiguration Proofs of Hirahara and Ohsaka (2024). We also prove that for any constant $\varepsilon \in (0,1)$, Minmax Hypergraph Vertex Cover Reconfiguration on $\mathrm{poly}(\varepsilon^{-1})$-uniform hypergraphs is PSPACE-hard to approximate within a factor of $2-\varepsilon$.</p></body></html>
2024-02-29 12:17:24 UTCECCC PapersTR24-038 | Polynomial Calculus for Quantified Boolean Logic: Lower Bounds through Circuits and Degree |
Olaf Beyersdorff,
Kaspar Kasche,
Luc Nicolas Spachmannhttps://eccc.weizmann.ac.il/report/2024/038
https://eccc.weizmann.ac.il/report/2024/038
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body><p>We initiate an in-depth proof-complexity analysis of polynomial calculus (Q-PC) for Quantified Boolean Formulas (QBF). In the course of this we establish a tight proof-size characterisation of Q-PC in terms of a suitable circuit model (polynomial decision lists). Using this correspondence we show a size-degree relation for Q-PC, similar in spirit, yet different from the classic size-degree formula for propositional PC by Impagliazzo, Pudlák and Sgall (1999).
We use the circuit characterisation together with the size-degree relation to obtain various new lower bounds on proof size in Q-PC. This leads to incomparability results for Q-PC systems over different fields.</p></body></html>
2024-02-29 06:39:39 UTCECCC PapersDIMACS Tutorial on Fine-grained Complexityhttp://cstheory-events.org/2024/02/29/dimacs-tutorial-on-fine-grained-complexity/
https://cstheory-events.org/2024/02/29/dimacs-tutorial-on-fine-grained-complexity/
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p>July 15-19, 2024 DIMACS, New Jersey, USA http://dimacs.rutgers.edu/events/details?eID=2764 Submission deadline: March 28, 2024 Registration deadline: March 25, 2024 DIMACS is organizing a tutorial in Fine-grained complexity in July 2024. The tutorial is primarily for graduate students working on topics in and around theoretical computer science (TCS) who are not already familiar with fine-grained complexity. Students … <a href="https://cstheory-events.org/2024/02/29/dimacs-tutorial-on-fine-grained-complexity/" class="more-link">Continue reading <span class="screen-reader-text">DIMACS Tutorial on Fine-grained Complexity</span></a></p>
<p class="authors">By shacharlovett</p>
</body></html>
2024-02-29 06:01:56 UTCCS Theory EventsCounting points with Riemann-Roch formulashttp://arxiv.org/abs/2402.18193v1
http://arxiv.org/abs/2402.18193v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Jorge+Mart%C3%ADn-Morales">Jorge Martín-Morales</a></p>We provide an algorithm for computing the number of integral points lying in
certain triangles that do not have integral vertices. We use techniques from
Algebraic Geometry such as the Riemann-Roch formula for weighted projective
planes and resolution of singularities. We analyze the complexity of the method
and show that the worst case is given by the Fibonacci sequence. At the end of
the manuscript a concrete example is developed in detail where the interplay
with other invariants of singularity theory is also treated.</body></html>
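For intuition, the naive alternative that the Riemann-Roch machinery avoids can be stated directly: exhaustively test every lattice point in the bounding box of a triangle with rational (non-integral) vertices. A minimal Python sketch, not the paper's method and exponential in the input bit-size:

```python
import math
from fractions import Fraction as F

def count_lattice_points(A, B, C):
    """Count integer points in the closed triangle ABC; the vertices may be
    rational (non-integral), as in the triangles treated in the paper."""
    def cross(o, p, q):
        return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])
    if cross(A, B, C) < 0:          # ensure counter-clockwise orientation
        B, C = C, B
    xs, ys = [A[0], B[0], C[0]], [A[1], B[1], C[1]]
    count = 0
    for x in range(math.ceil(min(xs)), math.floor(max(xs)) + 1):
        for y in range(math.ceil(min(ys)), math.floor(max(ys)) + 1):
            p = (x, y)
            # Inside or on the boundary iff p is left of all three edges.
            if cross(A, B, p) >= 0 and cross(B, C, p) >= 0 and cross(C, A, p) >= 0:
                count += 1
    return count
```

On the triangle with vertices (1/2, 1/2), (5/2, 1/2), (1/2, 5/2) this returns 3, the points (1,1), (1,2), (2,1).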
2024-02-29 01:00:00 UTCarXiv: Computational ComplexityEnhancing Roadway Safety: LiDAR-based Tree Clearance Analysishttp://arxiv.org/abs/2402.18309v1
http://arxiv.org/abs/2402.18309v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Miriam+Louise+Carnot">Miriam Louise Carnot</a>, <a href="https://dblp.uni-trier.de/search?q=Eric+Peukert">Eric Peukert</a>, <a href="https://dblp.uni-trier.de/search?q=Bogdan+Franczyk">Bogdan Franczyk</a></p>In the efforts for safer roads, ensuring adequate vertical clearance above
roadways is of great importance. Frequently, trees and other vegetation grow
above the roads, blocking the view of traffic signs and lights and
posing a danger to traffic participants. Accurately estimating this space from
simple images proves challenging due to a lack of depth information. This is
where LiDAR technology comes into play, a laser scanning sensor that reveals a
three-dimensional perspective. Thus far, LiDAR point clouds at the street level
have mainly been used for applications in the field of autonomous driving.
These scans, however, also open up possibilities in urban management. In this
paper, we present a new point cloud algorithm that can automatically detect
those parts of the trees that grow over the street and need to be trimmed. Our
system uses semantic segmentation to filter relevant points and downstream
processing steps to create the required volume to be kept clear above the road.
Challenges include obscured stretches of road, the noisy unstructured nature of
LiDAR point clouds, and the assessment of the road shape. The identified points
of non-compliant trees can be projected from the point cloud onto images,
providing municipalities with a visual aid for dealing with such occurrences.
By automating this process, municipalities can address potential road space
constraints, enhancing safety for all. They may also save valuable time by
carrying out the inspections more systematically. Our open-source code gives
communities inspiration on how to automate the process themselves.</body></html>
2024-02-29 01:00:00 UTCarXiv: Computational GeometryA One-step Image Retargeting Algorithm Based on Conformal Energyhttp://arxiv.org/abs/2402.18074v1
http://arxiv.org/abs/2402.18074v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Chengyang+Liu">Chengyang Liu</a>, <a href="https://dblp.uni-trier.de/search?q=Michael+K.+Ng">Michael K. Ng</a></p>The image retargeting problem is to find a proper mapping to resize an image
to one with a prescribed aspect ratio; the problem is quite popular these days. In
this paper, we propose an efficient and orientation-preserving one-step image
retargeting algorithm based on minimizing the harmonic energy, which can well
preserve the regions of interest (ROIs) and line structures in the image. We
also give some mathematical proofs in the paper to ensure the well-posedness
and accuracy of our algorithm.</body></html>
2024-02-29 01:00:00 UTCarXiv: Computational GeometryOn the Parameterized Complexity of Motion Planning for Rectangular
Robotshttp://arxiv.org/abs/2402.17846v1
http://arxiv.org/abs/2402.17846v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Iyad+Kanj">Iyad Kanj</a>, <a href="https://dblp.uni-trier.de/search?q=Salman+Parsa">Salman Parsa</a></p>We study computationally-hard fundamental motion planning problems where the
goal is to translate $k$ axis-aligned rectangular robots from their initial
positions to their final positions without collision, and with the minimum
number of translation moves. Our aim is to understand the interplay between the
number of robots and the geometric complexity of the input instance measured by
the input size, which is the number of bits needed to encode the coordinates of
the rectangles' vertices. We focus on axis-aligned translations, and more
generally, translations restricted to a given set of directions, and we study
the two settings where the robots move in the free plane, and where they are
confined to a bounding box. We obtain fixed-parameter tractable (FPT)
algorithms parameterized by $k$ for all the settings under consideration. In
the case where the robots move serially (i.e., one in each time step) and
axis-aligned, we prove a structural result stating that every problem instance
admits an optimal solution in which the moves are along a grid, whose size is a
function of $k$, that can be defined based on the input instance. This
structural result implies that the problem is fixed-parameter tractable
parameterized by $k$. We also consider the case in which the robots move in
parallel (i.e., multiple robots can move during the same time step), and which
falls under the category of Coordinated Motion Planning problems. Finally, we
show that, when the robots move in the free plane, the FPT results for the
serial motion case carry over to the case where the translations are restricted
to any given set of directions.</body></html>
2024-02-29 01:00:00 UTCarXiv: Computational GeometryFractional Linear Matroid Matching is in quasi-NChttp://arxiv.org/abs/2402.18276v1
http://arxiv.org/abs/2402.18276v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Rohit+Gurjar">Rohit Gurjar</a>, <a href="https://dblp.uni-trier.de/search?q=Taihei+Oki">Taihei Oki</a>, <a href="https://dblp.uni-trier.de/search?q=Roshan+Raj">Roshan Raj</a></p>The matching and linear matroid intersection problems are solvable in
quasi-NC, meaning that there exist deterministic algorithms that run in
polylogarithmic time and use quasi-polynomially many parallel processors.
However, such a parallel algorithm is unknown for linear matroid matching,
which generalizes both of these problems. In this work, we propose a quasi-NC
algorithm for fractional linear matroid matching, which is a relaxation of
linear matroid matching and commonly generalizes fractional matching and linear
matroid intersection. Our algorithm builds upon the connection of fractional
matroid matching to non-commutative Edmonds' problem recently revealed by Oki
and Soma~(2023). As a corollary, we also solve black-box non-commutative
Edmonds' problem with rank-two skew-symmetric coefficients.</body></html>
2024-02-29 01:00:00 UTCarXiv: Data Structures and AlgorithmsMax-Cut with $ε$-Accurate Predictionshttp://arxiv.org/abs/2402.18263v1
http://arxiv.org/abs/2402.18263v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Vincent+Cohen-Addad">Vincent Cohen-Addad</a>, <a href="https://dblp.uni-trier.de/search?q=Tommaso+d%27Orsi">Tommaso d'Orsi</a>, <a href="https://dblp.uni-trier.de/search?q=Anupam+Gupta">Anupam Gupta</a>, <a href="https://dblp.uni-trier.de/search?q=Euiwoong+Lee">Euiwoong Lee</a>, <a href="https://dblp.uni-trier.de/search?q=Debmalya+Panigrahi">Debmalya Panigrahi</a></p>We study the approximability of the MaxCut problem in the presence of
predictions. Specifically, we consider two models: in the noisy predictions
model, for each vertex we are given its correct label in $\{-1,+1\}$ with some
unknown probability $1/2 + \epsilon$, and the other (incorrect) label
otherwise. In the more-informative partial predictions model, for each vertex
we are given its correct label with probability $\epsilon$ and no label
otherwise. We assume only pairwise independence between vertices in both
models.
We show how these predictions can be used to improve on the worst-case
approximation ratios for this problem. Specifically, we give an algorithm that
achieves an $\alpha + \widetilde{\Omega}(\epsilon^4)$-approximation for the
noisy predictions model, where $\alpha \approx 0.878$ is the MaxCut threshold.
While this result also holds for the partial predictions model, we can also
give a $\beta + \Omega(\epsilon)$-approximation, where $\beta \approx 0.858$ is
the approximation ratio for MaxBisection given by Raghavendra and Tan. This
answers a question posed by Ola Svensson in his plenary session talk at
SODA'23.</body></html>
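As a toy illustration of the noisy predictions model only (not the paper's algorithm, which post-processes the predictions to beat the worst-case ratio), one can sample for each vertex a label agreeing with its true side with probability $1/2+\epsilon$ and cut the graph along the predicted labels:

```python
import random

def predicted_cut_value(edges, true_label, eps, rng):
    """Cut a graph according to noisy predictions: each vertex keeps its true
    side in {-1,+1} with probability 1/2 + eps and flips otherwise.
    Illustrates the prediction model; the paper's algorithm does much more."""
    pred = {v: s if rng.random() < 0.5 + eps else -s
            for v, s in true_label.items()}
    return sum(1 for u, v in edges if pred[u] != pred[v])
```

With $\epsilon = 1/2$ the predictions are exact and the cut defined by them recovers the full planted cut; as $\epsilon \to 0$ the predicted cut degrades toward a random one.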
2024-02-29 01:00:00 UTCarXiv: Data Structures and AlgorithmsPolynomial-time approximation schemes for induced subgraph problems on
fractionally tree-independence-number-fragile graphshttp://arxiv.org/abs/2402.18352v1
http://arxiv.org/abs/2402.18352v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Esther+Galby">Esther Galby</a>, <a href="https://dblp.uni-trier.de/search?q=Andrea+Munaro">Andrea Munaro</a>, <a href="https://dblp.uni-trier.de/search?q=Shizhou+Yang">Shizhou Yang</a></p>We investigate a relaxation of the notion of fractional treewidth-fragility,
namely fractional tree-independence-number-fragility. In particular, we obtain
polynomial-time approximation schemes for meta-problems such as finding a
maximum-weight sparse induced subgraph satisfying a given $\mathsf{CMSO}_2$
formula on fractionally tree-independence-number-fragile graph classes. Our
approach unifies and extends several known polynomial-time approximation
schemes on seemingly unrelated graph classes, such as classes of intersection
graphs of fat objects in a fixed dimension or proper minor-closed classes. We
also study the related notion of layered tree-independence number, a relaxation
of layered treewidth, and its applications to exact subexponential-time
algorithms.</body></html>
2024-02-29 01:00:00 UTCarXiv: Data Structures and AlgorithmsDynamic Deterministic Constant-Approximate Distance Oracles with
$n^ε$ Worst-Case Update Timehttp://arxiv.org/abs/2402.18541v1
http://arxiv.org/abs/2402.18541v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Bernhard+Haeupler">Bernhard Haeupler</a>, <a href="https://dblp.uni-trier.de/search?q=Yaowei+Long">Yaowei Long</a>, <a href="https://dblp.uni-trier.de/search?q=Thatchaphol+Saranurak">Thatchaphol Saranurak</a></p>We present a new distance oracle in the fully dynamic setting: given a
weighted undirected graph $G=(V,E)$ with $n$ vertices undergoing both edge
insertions and deletions, and an arbitrary parameter $\epsilon$ where
$1/\log^{c} n<\epsilon<1$ and $c>0$ is a small constant, we can
deterministically maintain a data structure with $n^{\epsilon}$ worst-case
update time that, given any pair of vertices $(u,v)$, returns a $2^{{\rm
poly}(1/\epsilon)}$-approximate distance between $u$ and $v$ in ${\rm
poly}(1/\epsilon)\log\log n$ query time.
Our algorithm significantly advances the state-of-the-art in two aspects,
both for fully dynamic algorithms and even decremental algorithms. First, no
existing algorithm with worst-case update time guarantees an
$o(n)$-approximation while also achieving $n^{2-\Omega(1)}$ update and
$n^{o(1)}$ query time; in contrast, our algorithm offers a constant
$O_{\epsilon}(1)$-approximation with $n^{\epsilon}$ update time and
$O_{\epsilon}(\log \log n)$ query time. Second, even if amortized update time
is allowed, it is the first deterministic constant-approximation algorithm with
$n^{1-\Omega(1)}$ update and query time. The best result in this direction is
the recent deterministic distance oracle by Chuzhoy and Zhang [STOC 2023] which
achieves an approximation of $(\log\log n)^{2^{O(1/\epsilon^{3})}}$ with
amortized update time of $n^{\epsilon}$ and query time of $2^{{\rm
poly}(1/\epsilon)}\log n\log\log n$.
We obtain the result by dynamizing tools related to length-constrained
expanders [Haeupler-R\"acke-Ghaffari, STOC 2022; Haeupler-Hershkowitz-Tan,
2023; Haeupler-Huebotter-Ghaffari, 2022]. Our technique completely bypasses the
40-year-old Even-Shiloach tree, which has remained the most pervasive tool in
the area but is inherently amortized.</body></html>
2024-02-29 01:00:00 UTCarXiv: Data Structures and AlgorithmsOn the enumeration of signatures of XOR-CNF'shttp://arxiv.org/abs/2402.18537v1
http://arxiv.org/abs/2402.18537v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Nadia+Creignou">Nadia Creignou</a>, <a href="https://dblp.uni-trier.de/search?q=Oscar+Defrain">Oscar Defrain</a>, <a href="https://dblp.uni-trier.de/search?q=Fr%C3%A9d%C3%A9ric+Olive">Frédéric Olive</a>, <a href="https://dblp.uni-trier.de/search?q=Simon+Vilmin">Simon Vilmin</a></p>Given a CNF formula $\varphi$ with clauses $C_1, \dots, C_m$ over a set of
variables $V$, a truth assignment $\mathbf{a} : V \to \{0, 1\}$ generates a
binary sequence $\sigma_\varphi(\mathbf{a})=(C_1(\mathbf{a}), \ldots,
C_m(\mathbf{a}))$, called a signature of $\varphi$, where $C_i(\mathbf{a})=1$
if clause $C_i$ evaluates to 1 under assignment $\mathbf{a}$, and
$C_i(\mathbf{a})=0$ otherwise. Signatures and their associated generation
problems have given rise to new yet promising research questions in algorithmic
enumeration. In a recent paper, B\'erczi et al. interestingly proved that
generating signatures of a CNF is tractable despite the fact that verifying a
solution is hard. They also showed the hardness of finding maximal signatures
of an arbitrary CNF due to the intractability of satisfiability in general.
Their contribution leaves open the problem of efficiently generating maximal
signatures for tractable classes of CNFs, i.e., those for which satisfiability
can be solved in polynomial time. Stepping into that direction, we completely
characterize the complexity of generating all, minimal, and maximal signatures
for XOR-CNFs.</body></html>
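The signature map is easy to state in code. Below is a brute-force sketch that enumerates all distinct signatures of a small XOR-CNF, where a clause is a list of literals (variable index, negated flag) and is satisfied iff the XOR of its literal values equals 1; the efficient enumeration algorithms of the paper avoid this $2^n$ sweep:

```python
from itertools import product

def xor_clause(clause, a):
    """Evaluate an XOR-clause under assignment a (a tuple of 0/1 values):
    satisfied iff the XOR of its literal values is 1."""
    return sum(a[v] ^ int(neg) for v, neg in clause) % 2

def signatures(clauses, n):
    """All distinct signatures of an XOR-CNF over n variables, by
    exhausting the 2^n assignments."""
    return {tuple(xor_clause(c, a) for c in clauses)
            for a in product((0, 1), repeat=n)}
```

For the two clauses $x_0 \oplus x_1$ and $x_1$ over two variables, all four binary sequences arise as signatures.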
2024-02-29 01:00:00 UTCarXiv: Data Structures and AlgorithmsInterval-Constrained Bipartite Matching over Timehttp://arxiv.org/abs/2402.18469v1
http://arxiv.org/abs/2402.18469v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Andreas+Abels">Andreas Abels</a>, <a href="https://dblp.uni-trier.de/search?q=Mariia+Anapolska">Mariia Anapolska</a></p>The interval-constrained online bipartite matching problem frequently occurs in
medical appointment scheduling: unit-time jobs representing patients arrive
online and are assigned to a time slot within their given time interval. We
consider a variant of this problem where reassignments are allowed and extend
it by a notion of current time, which is decoupled from the job arrival events.
As jobs appear, the current point in time gradually advances. Jobs that are
assigned to the current time unit become processed, which fixes part of the
matching and disables these jobs or slots for reassignments in future steps. We
refer to these time-dependent restrictions on reassignments as the over-time
property.
We show that FirstFit with reassignments according to the shortest augmenting
path rule is $\frac{2}{3}$-competitive with respect to the matching
cardinality, and that the bound is tight. Interestingly, this bound holds even
if the number of reassignments per job is bounded by a constant. For the number
of reassignments performed by the algorithm, we show that it is in $\Omega(n
\log n)$ in the worst case, where $n$ is the number of patients or jobs on the
online side. This result is in line with lower bounds for the number of
reassignments in online bipartite matching with reassignments, and, similarly
to this previous work, we also conjecture that this bound should be tight.
Known upper bounds like the $O(n \log^2 n)$ for online bipartite matching with
reassignments by Bernstein, Holm, and Rotenberg do not transfer directly: while
our interval constraints simplify the problem, the over-time property restricts
the set of possible reassignments.</body></html>
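To make the reassignment mechanism concrete, here is a hypothetical sketch of FirstFit with augmenting-path reassignments for unit jobs with inclusive slot intervals. It uses a plain DFS augmenting path rather than the shortest-augmenting-path rule analyzed in the paper, so it illustrates the model only:

```python
def assign(jobs):
    """jobs[j] = (l, r): job j may occupy any slot in the interval [l, r].
    On arrival, try FirstFit; if all slots are taken, search for an
    augmenting path that reassigns earlier jobs (DFS, not the paper's
    shortest-augmenting-path rule)."""
    slot_of = {}   # slot -> job currently assigned there

    def try_place(j, interval, seen):
        l, r = interval
        for s in range(l, r + 1):
            if s in seen:
                continue
            seen.add(s)
            # Take a free slot, or recursively evict the occupant.
            if s not in slot_of or try_place(slot_of[s], jobs[slot_of[s]], seen):
                slot_of[s] = j
                return True
        return False

    matched = 0
    for j, interval in enumerate(jobs):
        if try_place(j, interval, set()):
            matched += 1
    return matched, slot_of
```

Note that this sketch ignores the over-time property: it never freezes processed slots, which is exactly the restriction the paper adds on top of plain matching with reassignments.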
2024-02-29 01:00:00 UTCarXiv: Data Structures and AlgorithmsDynaWarp -- Efficient, large-scale log storage and retrievalhttp://arxiv.org/abs/2402.18355v1
http://arxiv.org/abs/2402.18355v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Julian+Reichinger">Julian Reichinger</a>, <a href="https://dblp.uni-trier.de/search?q=Thomas+Krismayer">Thomas Krismayer</a>, <a href="https://dblp.uni-trier.de/search?q=Jan+Rellermeyer">Jan Rellermeyer</a></p>Modern, large scale monitoring systems have to process and store vast amounts
of log data in near real-time. At query time the systems have to find relevant
logs based on the content of the log message using support structures that can
scale to these amounts of data while still being efficient to use. We present
our novel DynaWarp membership sketch, capable of answering Multi-Set
Multi-Membership-Queries, that can be used as an alternative to existing
indexing structures for streamed log data. In our experiments, DynaWarp
required up to 93% less storage space than the tested state-of-the-art inverted
index and had up to four orders of magnitude less false-positives than the
tested state-of-the-art membership sketch. Additionally, DynaWarp achieved up
to 250 times higher query throughput than the tested inverted index and up to
240 times higher query throughput than the tested membership sketch.</body></html>
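DynaWarp's structure is not reproduced in the abstract; as background, a classic Bloom filter shows what a (single-set) membership sketch looks like, trading a small false-positive rate for large space savings, whereas DynaWarp answers the richer multi-set multi-membership queries:

```python
import hashlib

class BloomFilter:
    """Textbook membership sketch (illustrative stand-in only, not DynaWarp):
    m bits, k hash functions, no false negatives, rare false positives."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _hashes(self, item):
        # Derive k hash positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for h in self._hashes(item):
            self.bits[h] = 1

    def __contains__(self, item):
        return all(self.bits[h] for h in self._hashes(item))
```

A log-indexing use would insert the tokens of each log message and query the sketch before touching slower storage.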
2024-02-29 01:00:00 UTCarXiv: Data Structures and AlgorithmsOnline Edge Coloring is (Nearly) as Easy as Offlinehttp://arxiv.org/abs/2402.18339v1
http://arxiv.org/abs/2402.18339v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Joakim+Blikstad">Joakim Blikstad</a>, <a href="https://dblp.uni-trier.de/search?q=Ola+Svensson">Ola Svensson</a>, <a href="https://dblp.uni-trier.de/search?q=Radu+Vintan">Radu Vintan</a>, <a href="https://dblp.uni-trier.de/search?q=David+Wajc">David Wajc</a></p>The classic theorem of Vizing (Diskret. Analiz.'64) asserts that any graph of
maximum degree $\Delta$ can be edge colored (offline) using no more than
$\Delta+1$ colors (with $\Delta$ being a trivial lower bound). In the online
setting, Bar-Noy, Motwani and Naor (IPL'92) conjectured that a
$(1+o(1))\Delta$-edge-coloring can be computed online in $n$-vertex graphs of
maximum degree $\Delta=\omega(\log n)$. Numerous algorithms made progress on
this question, using a higher number of colors or assuming restricted arrival
models, such as random-order edge arrivals or vertex arrivals (e.g., AGKM
FOCS'03, BMM SODA'10, CPW FOCS'19, BGW SODA'21, KLSST STOC'22). In this work,
we resolve this longstanding conjecture in the affirmative in the most general
setting of adversarial edge arrivals. We further generalize this result to
obtain online counterparts of the list edge coloring result of Kahn (J. Comb.
Theory. A'96) and of the recent "local" edge coloring result of Christiansen
(STOC'23).</body></html>
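For contrast with the $(1+o(1))\Delta$ bound achieved in the paper, the textbook greedy online algorithm is easy to state: give each arriving edge the smallest color missing at both endpoints, which may need up to $2\Delta-1$ colors:

```python
from collections import defaultdict

def greedy_online_edge_coloring(edges):
    """Greedy baseline for online edge coloring under adversarial edge
    arrivals: each edge gets the smallest color unused at both endpoints.
    Uses at most 2*Delta - 1 colors, far from the (1+o(1))*Delta target."""
    used = defaultdict(set)   # vertex -> colors on its incident edges
    coloring = {}
    for u, v in edges:
        c = 0
        while c in used[u] or c in used[v]:
            c += 1
        coloring[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return coloring
```

On a 4-cycle presented edge by edge this uses two colors, matching $\Delta = 2$; adversarial arrival orders on other graphs force greedy well above $\Delta + 1$.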
2024-02-29 01:00:00 UTCarXiv: Data Structures and AlgorithmsOutput-Sensitive Enumeration of Potential Maximal Cliques in Polynomial
Spacehttp://arxiv.org/abs/2402.18265v1
http://arxiv.org/abs/2402.18265v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Caroline+Brosse">Caroline Brosse</a>, <a href="https://dblp.uni-trier.de/search?q=Alessio+Conte">Alessio Conte</a>, <a href="https://dblp.uni-trier.de/search?q=Vincent+Limouzy">Vincent Limouzy</a>, <a href="https://dblp.uni-trier.de/search?q=Giulia+Punzi">Giulia Punzi</a>, <a href="https://dblp.uni-trier.de/search?q=Davide+Rucci">Davide Rucci</a></p>A set of vertices in a graph forms a potential maximal clique if there exists
a minimal chordal completion in which it is a maximal clique. Potential maximal
cliques were first introduced as a key tool to obtain an efficient, though
exponential-time algorithm to compute the treewidth of a graph. As a byproduct,
this made it possible to compute the treewidth of various graph classes in polynomial
time.
In recent years, the concept of potential maximal cliques regained interest
as it proved to be useful for a handful of graph algorithmic problems. In
particular, it turned out to be a key tool to obtain a polynomial time
algorithm for computing maximum weight independent sets in $P_5$-free and
$P_6$-free graphs (Lokshtanov et al., SODA '14, and Grzesik et al., SODA '19).
In most of their applications, obtaining all the potential maximal cliques
constitutes an algorithmic bottleneck, thus motivating the question of how to
efficiently enumerate all the potential maximal cliques in a graph $G$.
The state-of-the-art algorithm by Bouchitt\'e \& Todinca can enumerate
potential maximal cliques in output-polynomial time by using exponential space,
a significant limitation for the size of feasible instances. In this paper, we
revisit this algorithm and design an enumeration algorithm that preserves an
output-polynomial time complexity while only requiring polynomial space.</body></html>
2024-02-29 01:00:00 UTCarXiv: Data Structures and AlgorithmsLower Bounds for Leaf Rank of Leaf Powershttp://arxiv.org/abs/2402.18245v1
http://arxiv.org/abs/2402.18245v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Svein+H%C3%B8gemo">Svein Høgemo</a></p>Leaf powers and $k$-leaf powers have been studied for over 20 years, but
there are still several aspects of this graph class that are poorly understood.
One such aspect is the leaf rank of leaf powers, i.e. the smallest number $k$
such that a graph $G$ is a $k$-leaf power. Computing the leaf rank of leaf
powers has proved a hard task, and furthermore, results about the asymptotic
growth of the leaf rank as a function of the number of vertices in the graph
have been few and far between. We present an infinite family of rooted directed
path graphs that are leaf powers, and prove that they have leaf rank
exponential in the number of vertices (utilizing a type of subtree model first
presented by Rautenbach [Some remarks about leaf roots. Discrete mathematics,
2006]). This answers an open question by Brandst\"adt et al. [Rooted directed
path graphs are leaf powers. Discrete mathematics, 2010].</body></html>
2024-02-29 01:00:00 UTCarXiv: Data Structures and AlgorithmsComputing Minimal Absent Words and Extended Bispecial Factors with CDAWG
Spacehttp://arxiv.org/abs/2402.18090v1
http://arxiv.org/abs/2402.18090v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Shunsuke+Inenaga">Shunsuke Inenaga</a>, <a href="https://dblp.uni-trier.de/search?q=Takuya+Mieno">Takuya Mieno</a>, <a href="https://dblp.uni-trier.de/search?q=Hiroki+Arimura">Hiroki Arimura</a>, <a href="https://dblp.uni-trier.de/search?q=Mitsuru+Funakoshi">Mitsuru Funakoshi</a>, <a href="https://dblp.uni-trier.de/search?q=Yuta+Fujishige">Yuta Fujishige</a></p>A string $w$ is said to be a minimal absent word (MAW) for a string $S$ if
$w$ does not occur in $S$ and any proper substring of $w$ occurs in $S$. We
focus on non-trivial MAWs, which are of length at least 2. Finding such
non-trivial MAWs for a given string is motivated by applications in
bioinformatics and data compression. Fujishige et al. [TCS 2023] proposed a
data structure of size $\Theta(n)$ that can output the set $\mathsf{MAW}(S)$ of
all MAWs for a given string $S$ of length $n$ in $O(n + |\mathsf{MAW}(S)|)$
time, based on the directed acyclic word graph (DAWG). In this paper, we
present a more space efficient data structure based on the compact DAWG
(CDAWG), which can output $\mathsf{MAW}(S)$ in $O(|\mathsf{MAW}(S)|)$ time with
$O(e)$ space, where $e$ denotes the minimum of the sizes of the CDAWGs for $S$
and for its reversal $S^R$. For any strings of length $n$, it holds that $e <
2n$, and for highly repetitive strings $e$ can be sublinear (up to logarithmic)
in $n$. We also show that MAWs and their generalization minimal rare words have
close relationships with extended bispecial factors, via the CDAWG.</body></html>
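The MAW definition admits a compact brute-force reference implementation (exponential time, unlike the $O(e)$-space CDAWG structure of the paper), using the standard fact that $awb$ with $|w| \ge 1$ is a MAW iff $awb$ is absent while both $aw$ and $wb$ occur:

```python
def minimal_absent_words(S):
    """Brute-force the non-trivial MAWs (length >= 2) of S: w is a MAW iff
    w does not occur in S but every proper substring of w does.
    Reference implementation only; the paper's CDAWG method is far smaller."""
    subs = {S[i:j] for i in range(len(S)) for j in range(i + 1, len(S) + 1)}
    alphabet = sorted(set(S))
    maws = set()
    # Length >= 3: a+w+b absent while a+w and w+b both occur.
    for w in subs:
        for a in alphabet:
            for b in alphabet:
                cand = a + w + b
                if cand not in subs and a + w in subs and w + b in subs:
                    maws.add(cand)
    # Length 2: a+b absent while both letters occur (they are in the alphabet).
    for a in alphabet:
        for b in alphabet:
            if a + b not in subs:
                maws.add(a + b)
    return maws
```

For example, the non-trivial MAWs of "abba" over {a, b} are "aa", "aba", "bab", and "bbb".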
2024-02-29 01:00:00 UTCarXiv: Data Structures and AlgorithmsTighter Bounds for Local Differentially Private Core Decomposition and
Densest Subgraphhttp://arxiv.org/abs/2402.18020v1
http://arxiv.org/abs/2402.18020v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Monika+Henzinger">Monika Henzinger</a>, <a href="https://dblp.uni-trier.de/search?q=A.+R.+Sricharan">A. R. Sricharan</a>, <a href="https://dblp.uni-trier.de/search?q=Leqi+Zhu">Leqi Zhu</a></p>Computing the core decomposition of a graph is a fundamental problem that has
recently been studied in the differentially private setting, motivated by
practical applications in data mining. In particular, Dhulipala et al. [FOCS
2022] gave the first mechanism for approximate core decomposition in the
challenging and practically relevant setting of local differential privacy. One
of the main open problems left by their work is whether the accuracy, i.e., the
approximation ratio and additive error, of their mechanism can be improved. We
show the first lower bounds on the additive error of approximate and exact core
decomposition mechanisms in the centralized and local model of differential
privacy, respectively. We also give mechanisms for exact and approximate core
decomposition in the local model, with almost matching additive error bounds.
Our mechanisms are based on a black-box application of continual counting. They
also yield improved mechanisms for the approximate densest subgraph problem in
the local model.</body></html>
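For readers new to the problem, the exact (non-private) core decomposition that these mechanisms approximate is computed by the classic peeling algorithm, repeatedly removing a minimum-degree vertex:

```python
from collections import defaultdict

def core_numbers(edges):
    """Exact core decomposition by peeling: repeatedly remove a vertex of
    minimum remaining degree; a vertex's core number is the largest minimum
    degree seen up to its removal. This is the non-private baseline."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(adj[v]) for v in adj}
    core, k = {}, 0
    remaining = set(adj)
    while remaining:
        v = min(remaining, key=deg.get)
        k = max(k, deg[v])
        core[v] = k
        remaining.remove(v)
        for w in adj[v]:
            if w in remaining:
                deg[w] -= 1
    return core
```

A triangle with one pendant vertex illustrates the output: the pendant has core number 1 and the triangle vertices have core number 2.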
2024-02-29 01:00:00 UTCarXiv: Data Structures and AlgorithmsDecremental $(1+ε)$-Approximate Maximum Eigenvector: Dynamic
Power Methodhttp://arxiv.org/abs/2402.17929v1
http://arxiv.org/abs/2402.17929v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Deeksha+Adil">Deeksha Adil</a>, <a href="https://dblp.uni-trier.de/search?q=Thatchaphol+Saranurak">Thatchaphol Saranurak</a></p>We present a dynamic algorithm for maintaining $(1+\epsilon)$-approximate
maximum eigenvector and eigenvalue of a positive semi-definite matrix $A$
undergoing \emph{decreasing} updates, i.e., updates which may only decrease
eigenvalues. Given a vector $v$ updating $A\gets A-vv^{\top}$, our algorithm
takes $\tilde{O}(\mathrm{nnz}(v))$ amortized update time, i.e., polylogarithmic
per non-zeros in the update vector.
Our technique is based on a novel analysis of the influential power method in
the dynamic setting. The two previous sets of techniques have the following
drawbacks (1) algebraic techniques can maintain exact solutions but their
update time is at least polynomial per non-zeros, and (2) sketching techniques
admit polylogarithmic update time but suffer from a crude additive
approximation.
Our algorithm exploits an oblivious adversary. Interestingly, we show that
any algorithm with polylogarithmic update time per non-zeros that works against
an adaptive adversary and satisfies an additional natural property would imply
a breakthrough for checking psd-ness of matrices in $\tilde{O}(n^{2})$ time,
instead of $O(n^{\omega})$ time.</body></html>
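The static power method whose dynamic analysis the paper develops is only a few lines; a pure-Python sketch for a symmetric PSD matrix given as a list of rows:

```python
import math
import random

def top_eigenpair(A, iters=500, seed=0):
    """Static power method: repeatedly apply A to a random unit vector and
    renormalize; the Rayleigh quotient converges to the top eigenvalue of a
    symmetric PSD matrix A. (The paper maintains this under decreasing
    updates A <- A - v v^T; this sketch recomputes from scratch.)"""
    rng = random.Random(seed)
    n = len(A)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(v * v for v in y))
        x = [v / norm for v in y]
        lam = sum(x[i] * sum(A[i][j] * x[j] for j in range(n)) for i in range(n))
    return lam, x
```

Convergence is geometric in the gap between the top two eigenvalues, which is exactly why maintaining the iterate under eigenvalue-decreasing updates requires the new analysis of the paper.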
2024-02-29 01:00:00 UTCarXiv: Data Structures and AlgorithmsMeta-Complexity: A Basic Introduction for the Meta-Perplexedhttps://blog.simons.berkeley.edu/?p=980
https://blog.simons.berkeley.edu/2024/02/meta-complexity-a-basic-introduction-for-the-meta-perplexed/
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p>by Adam Becker (science communicator in residence, Spring 2023) Think about the last time you faced a problem you couldn’t solve. Say it was something practical, something that seemed small — a leaky faucet, for example. There’s an exposed screw … <a href="https://blog.simons.berkeley.edu/2024/02/meta-complexity-a-basic-introduction-for-the-meta-perplexed/">Continue reading <span class="meta-nav">→</span></a></p>
<p class="authors">By Simons Institute Editor</p>
</body></html>
2024-02-28 17:26:47 UTCSimons Institute BlogA Quantum Statetag:blogger.com,1999:blog-3722233.post-7912911629140519727
https://blog.computationalcomplexity.org/2024/02/a-quantum-state.html
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p></p>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgR5xUTVIOdCNDTK9Y0SG_ObviNOlIuqf03LCogPLNBIWo_TyvztTtSjzHLVgfF2oxyN7PdmpfVSK2AWy_uDmjkn6nez1aqJ4ffvHQFQe3RFnL4m0J428Lkz3Avx6tsVq3vHnQsCJiR0iG0R73TBR8-qsbtFY-5xWQTMiNGpc9Lvt-trMy3-cvb/s1024/Quantum%20Abe.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1024" data-original-width="1024" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgR5xUTVIOdCNDTK9Y0SG_ObviNOlIuqf03LCogPLNBIWo_TyvztTtSjzHLVgfF2oxyN7PdmpfVSK2AWy_uDmjkn6nez1aqJ4ffvHQFQe3RFnL4m0J428Lkz3Avx6tsVq3vHnQsCJiR0iG0R73TBR8-qsbtFY-5xWQTMiNGpc9Lvt-trMy3-cvb/s320/Quantum%20Abe.jpg" width="320"></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Illinois' most famous citizen working on a quantum computer</td></tr>
</tbody></table>
<br>The governor of Illinois, JB Pritzker, unveiled his budget last week including <a href="https://www.axios.com/2024/02/21/illinois-jb-pritzker-quantum-computing-semiconductors">$500 million for quantum computing research</a>. Is this the best way to spend my tax dollars?<p>As long-time readers know, I have <a href="https://blog.computationalcomplexity.org/2021/04/quantum-stories.html">strong doubts</a> about the real-world applications of quantum computing and the hype for the field. But the article does not suggest any applications of quantum computing, rather</p>
<blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><p style="text-align: left;">Pritzker says he's optimistic that the Illinois state legislature will embrace his proposal as a catalyst for job creation and investment attraction.</p></blockquote>
<p>That does make sense. Investing in quantum may very well bring in extra federal and corporate investment into quantum in Chicago. At the least it will bring smart people to Illinois to fill research roles. And it's not as if this money would go to any other scientific endeavor if we didn't put it into quantum.</p>
<p>So it makes sense financially and scientifically even if these machines don't actually solve any real-world problems. Quantum winter will eventually come, but we might as well take advantage of the hype while it's still here. Or should we?</p>
<p>A physicist colleague strongly supports Illinois spending half a billion on quantum. He lives in Indiana. </p>
<p class="authors">By Lance Fortnow</p>
</body></html>
2024-02-28 15:48:00 UTC | Computational Complexity | Tight Lower Bounds for Block-Structured Integer Programs | http://arxiv.org/abs/2402.17290v1
http://arxiv.org/abs/2402.17290v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Christoph+Hunkenschr%C3%B6der">Christoph Hunkenschröder</a>, <a href="https://dblp.uni-trier.de/search?q=Kim-Manuel+Klein">Kim-Manuel Klein</a>, <a href="https://dblp.uni-trier.de/search?q=Martin+Kouteck%C3%BD">Martin Koutecký</a>, <a href="https://dblp.uni-trier.de/search?q=Alexandra+Lassota">Alexandra Lassota</a>, <a href="https://dblp.uni-trier.de/search?q=Asaf+Levin">Asaf Levin</a></p>We study fundamental block-structured integer programs called tree-fold and
multi-stage IPs. Tree-fold IPs admit a constraint matrix with independent
blocks linked together by few constraints in a recursive pattern; and
transposing their constraint matrix yields multi-stage IPs. The
state-of-the-art algorithms to solve these IPs have an exponential gap in their
running times, making it natural to ask whether this gap is inherent. We answer
this question in the affirmative. Assuming the Exponential Time Hypothesis, we prove
lower bounds showing that the exponential difference is necessary, and that the
known algorithms are near optimal. Moreover, we prove unconditional lower
bounds on the norms of the Graver basis, a fundamental building block of all
known algorithms to solve these IPs. This shows that none of the current
approaches can be improved beyond this bound.</body></html>
2024-02-28 01:00:00 UTC | arXiv: Computational Complexity | Graph Neural Networks and Arithmetic Circuits | http://arxiv.org/abs/2402.17805v1
http://arxiv.org/abs/2402.17805v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Timon+Barlag">Timon Barlag</a>, <a href="https://dblp.uni-trier.de/search?q=Vivian+Holzapfel">Vivian Holzapfel</a>, <a href="https://dblp.uni-trier.de/search?q=Laura+Strieker">Laura Strieker</a>, <a href="https://dblp.uni-trier.de/search?q=Jonni+Virtema">Jonni Virtema</a>, <a href="https://dblp.uni-trier.de/search?q=Heribert+Vollmer">Heribert Vollmer</a></p>We characterize the computational power of neural networks that follow the
graph neural network (GNN) architecture, not restricted to aggregate-combine
GNNs or other particular types. We establish an exact correspondence between
the expressivity of GNNs using diverse activation functions and arithmetic
circuits over real numbers. In our results the activation function of the
network becomes a gate type in the circuit. Our result holds for families of
constant depth circuits and networks, both uniformly and non-uniformly, for all
common activation functions.</body></html>
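As a minimal illustration of the circuit view described above (a toy aggregate-combine layer over scalar features, not the paper's general construction; the function and parameter names here are made up), each arithmetic operation below corresponds to a +/× gate, and the activation function becomes an extra gate type:

```python
def gnn_layer(adj, feats, w_self, w_nbr, activation):
    """One aggregate-combine GNN layer over scalar node features:
        h'_v = activation(w_self * h_v + w_nbr * sum_{u in N(v)} h_u)
    Read as an arithmetic circuit: the sum is a fan-in of + gates,
    the weighted combine is * and + gates, and the activation is a
    dedicated gate type, mirroring the GNN-to-circuit correspondence."""
    out = []
    for v, nbrs in enumerate(adj):
        agg = sum(feats[u] for u in nbrs)        # + gates (aggregation)
        comb = w_self * feats[v] + w_nbr * agg   # * and + gates (combine)
        out.append(activation(comb))             # activation gate
    return out
```

Unrolling such layers over a fixed graph yields a constant-depth circuit family, which is the shape of the correspondence the abstract refers to.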
2024-02-28 01:00:00 UTC | arXiv: Computational Complexity | Geometric Deep Learning for Computer-Aided Design: A Survey | http://arxiv.org/abs/2402.17695v1
http://arxiv.org/abs/2402.17695v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Negar+Heidari">Negar Heidari</a>, <a href="https://dblp.uni-trier.de/search?q=Alexandros+Iosifidis">Alexandros Iosifidis</a></p>Geometric Deep Learning techniques have become a transformative force in the
field of Computer-Aided Design (CAD), and have the potential to revolutionize
how designers and engineers approach and enhance the design process. By
harnessing the power of machine learning-based methods, CAD designers can
optimize their workflows, save time and effort while making better informed
decisions, and create designs that are both innovative and practical. The
ability to process the CAD designs represented by geometric data and to analyze
their encoded features enables the identification of similarities among diverse
CAD models, the proposition of alternative designs and enhancements, and even
the generation of novel design alternatives. This survey offers a comprehensive
overview of learning-based methods in computer-aided design across various
categories, including similarity analysis and retrieval, 2D and 3D CAD model
synthesis, and CAD generation from point clouds. Additionally, it provides a
complete list of benchmark datasets and their characteristics, along with
open-source codes that have propelled research in this domain. The final
discussion delves into the challenges prevalent in this field, followed by
potential future research directions in this rapidly evolving field.</body></html>
2024-02-28 01:00:00 UTC | arXiv: Computational Geometry | Enclosing Points with Geometric Objects | http://arxiv.org/abs/2402.17322v1
http://arxiv.org/abs/2402.17322v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Timothy+M.+Chan">Timothy M. Chan</a>, <a href="https://dblp.uni-trier.de/search?q=Qizheng+He">Qizheng He</a>, <a href="https://dblp.uni-trier.de/search?q=Jie+Xue">Jie Xue</a></p>Let $X$ be a set of points in $\mathbb{R}^2$ and $\mathcal{O}$ be a set of
geometric objects in $\mathbb{R}^2$, where $|X| + |\mathcal{O}| = n$. We study
the problem of computing a minimum subset $\mathcal{O}^* \subseteq \mathcal{O}$
that encloses all points in $X$. Here a point $x \in X$ is enclosed by
$\mathcal{O}^*$ if it lies in a bounded connected component of $\mathbb{R}^2
\backslash (\bigcup_{O \in \mathcal{O}^*} O)$. We propose two algorithmic
frameworks to design polynomial-time approximation algorithms for the problem.
The first framework is based on sparsification and min-cut, which results in
$O(1)$-approximation algorithms for unit disks, unit squares, etc. The second
framework is based on LP rounding, which results in an $O(\alpha(n)\log
n)$-approximation algorithm for segments, where $\alpha(n)$ is the inverse
Ackermann function, and an $O(\log n)$-approximation algorithm for disks.</body></html>
2024-02-28 01:00:00 UTC | arXiv: Data Structures and Algorithms | Robustly Learning Single-Index Models via Alignment Sharpness | http://arxiv.org/abs/2402.17756v1
http://arxiv.org/abs/2402.17756v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Nikos+Zarifis">Nikos Zarifis</a>, <a href="https://dblp.uni-trier.de/search?q=Puqian+Wang">Puqian Wang</a>, <a href="https://dblp.uni-trier.de/search?q=Ilias+Diakonikolas">Ilias Diakonikolas</a>, <a href="https://dblp.uni-trier.de/search?q=Jelena+Diakonikolas">Jelena Diakonikolas</a></p>We study the problem of learning Single-Index Models under the $L_2^2$ loss
in the agnostic model. We give an efficient learning algorithm, achieving a
constant factor approximation to the optimal loss, that succeeds under a range
of distributions (including log-concave distributions) and a broad class of
monotone and Lipschitz link functions. This is the first efficient constant
factor approximate agnostic learner, even for Gaussian data and for any
nontrivial class of link functions. Prior work for the case of unknown link
function either works in the realizable setting or does not attain constant
factor approximation. The main technical ingredient enabling our algorithm and
analysis is a novel notion of a local error bound in optimization that we term
alignment sharpness and that may be of broader interest.</body></html>
2024-02-28 01:00:00 UTC | arXiv: Data Structures and Algorithms | Learning-Based Algorithms for Graph Searching Problems | http://arxiv.org/abs/2402.17736v1
http://arxiv.org/abs/2402.17736v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Adela+Frances+DePavia">Adela Frances DePavia</a>, <a href="https://dblp.uni-trier.de/search?q=Erasmo+Tani">Erasmo Tani</a>, <a href="https://dblp.uni-trier.de/search?q=Ali+Vakilian">Ali Vakilian</a></p>We consider the problem of graph searching with prediction recently
introduced by Banerjee et al. (2022). In this problem, an agent, starting at
some vertex $r$ has to traverse a (potentially unknown) graph $G$ to find a
hidden goal node $g$ while minimizing the total distance travelled. We study a
setting in which at any node $v$, the agent receives a noisy estimate of the
distance from $v$ to $g$. We design algorithms for this search task on unknown
graphs. We establish the first formal guarantees on unknown weighted graphs and
provide lower bounds showing that the algorithms we propose have optimal or
nearly-optimal dependence on the prediction error. Further, we perform
numerical experiments demonstrating that in addition to being robust to
adversarial error, our algorithms perform well in typical instances in which
the error is stochastic. Finally, we provide alternative simpler performance
bounds on the algorithms of Banerjee et al. (2022) for the case of searching on
a known graph, and establish new lower bounds for this setting.</body></html>
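As a noise-free toy of the search setting (exact rather than noisy predictions, and not one of the algorithms analyzed in the paper), a greedy walk that always follows the smallest predicted distance reaches the goal along a shortest path; `bfs_distances` is a hypothetical helper standing in for an error-free predictor:

```python
from collections import deque

def bfs_distances(graph, g):
    """Exact hop distances from every node to goal g (unweighted graph)."""
    dist = {g: 0}
    q = deque([g])
    while q:
        v = q.popleft()
        for u in graph[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

def greedy_search(graph, r, g, predict):
    """Starting at r, repeatedly move to the neighbor with the smallest
    predicted distance to g; return the total distance travelled.
    With exact predictions this takes exactly dist(r, g) steps; the
    paper's contribution is handling *noisy* predictions robustly."""
    v, travelled = r, 0
    while v != g:
        v = min(graph[v], key=predict)
        travelled += 1
    return travelled
```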
2024-02-28 01:00:00 UTC | arXiv: Data Structures and Algorithms | The SMART approach to instance-optimal online learning | http://arxiv.org/abs/2402.17720v1
http://arxiv.org/abs/2402.17720v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Siddhartha+Banerjee">Siddhartha Banerjee</a>, <a href="https://dblp.uni-trier.de/search?q=Alankrita+Bhatt">Alankrita Bhatt</a>, <a href="https://dblp.uni-trier.de/search?q=Christina+Lee+Yu">Christina Lee Yu</a></p>We devise an online learning algorithm -- titled Switching via Monotone
Adapted Regret Traces (SMART) -- that adapts to the data and achieves regret
that is instance optimal, i.e., simultaneously competitive on every input
sequence compared to the performance of the follow-the-leader (FTL) policy and
the worst case guarantee of any other input policy. We show that the regret of
the SMART policy on any input sequence is within a multiplicative factor
$e/(e-1) \approx 1.58$ of the smaller of: 1) the regret obtained by FTL on the
sequence, and 2) the upper bound on regret guaranteed by the given worst-case
policy. This implies a strictly stronger guarantee than typical
`best-of-both-worlds' bounds as the guarantee holds for every input sequence
regardless of how it is generated. SMART is simple to implement as it begins by
playing FTL and switches at most once during the time horizon to the worst-case
algorithm. Our approach and results follow from an operational reduction of
instance optimal online learning to competitive analysis for the ski-rental
problem. We complement our competitive ratio upper bounds with a fundamental
lower bound showing that over all input sequences, no algorithm can get better
than a $1.43$-fraction of the minimum regret achieved by FTL and the
minimax-optimal policy. We also present a modification of SMART that combines
FTL with a "small-loss" algorithm to achieve instance optimality between the
regret of FTL and the small loss regret bound.</body></html>
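The operational reduction to ski-rental can be illustrated with the classic deterministic break-even strategy (rent until the rent paid would reach the purchase price, then buy), which is 2-competitive; this toy sketch is for intuition only and is not the SMART policy itself:

```python
def ski_rental_cost(season_length: int, buy_cost: int) -> int:
    """Break-even ski-rental strategy: rent (cost 1/day) for up to
    buy_cost - 1 days, then buy if the season is still going.
    Total cost is at most 2 * min(season_length, buy_cost) - 1,
    i.e. within a factor 2 of the offline optimum min(season, buy)."""
    rent_days = min(season_length, buy_cost - 1)
    cost = rent_days
    if season_length > rent_days:  # season outlasts the renting phase: buy
        cost += buy_cost
    return cost
```

Like SMART's single switch from FTL to the worst-case algorithm, this strategy commits at most once; the $e/(e-1)$ factor in the abstract is the optimal randomized competitive ratio for ski-rental.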
2024-02-28 01:00:00 UTC | arXiv: Data Structures and Algorithms | Deterministic Cache-Oblivious Funnelselect | http://arxiv.org/abs/2402.17631v1
http://arxiv.org/abs/2402.17631v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Gerth+St%C3%B8lting+Brodal">Gerth Stølting Brodal</a>, <a href="https://dblp.uni-trier.de/search?q=Sebastian+Wild">Sebastian Wild</a></p>In the multiple-selection problem one is given an unsorted array $S$ of $N$
elements and an array of $q$ query ranks $r_1 &lt; \cdots &lt; r_q$, and the task is to
return, in sorted order, the $q$ elements of $S$ of rank $r_1, \ldots, r_q$,
respectively. The asymptotic deterministic comparison complexity of the problem
was settled by Dobkin and Munro. In the I/O model, an optimal I/O complexity was
achieved by Hu et al. Recently, we presented a cache-oblivious algorithm with
matching I/O complexity, named funnelselect, since it heavily borrows ideas from
the cache-oblivious sorting algorithm funnelsort from the seminal paper by Frigo,
Leiserson, Prokop and Ramachandran. Funnelselect is inherently randomized, as it
relies on sampling for cheaply finding many good pivots. In this paper we present
deterministic funnelselect, achieving the same optimal I/O complexity
cache-obliviously without randomization. Our new algorithm essentially replaces a
single reversed-funnel computation using random pivots by recursive multiple
reversed-funnel computations. To meet the I/O bound, this requires a carefully
chosen subproblem size based on the entropy of the sequence of query ranks;
deterministic funnelselect thus raises distinct technical challenges not met by
funnelselect. The resulting worst-case I/O bound is
$O\bigl(\frac{N}{B} + \sum_{i=1}^{q+1} \frac{\Delta_i}{B} \log_{M/B} \frac{N}{\Delta_i}\bigr)$,
where $B$ is the external memory block size, $M \geq B^{1+\epsilon}$ is the
internal memory size for some constant $\epsilon &gt; 0$, and
$\Delta_i = r_{i} - r_{i-1}$ (assuming $r_0=0$ and $r_{q+1}=N + 1$).
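For intuition about the underlying problem (ignoring the I/O model and cache-obliviousness entirely), multiple selection can be sketched as a recursive partitioning routine; this illustrative in-memory version is not the paper's cache-oblivious algorithm:

```python
import random

def multiselect(S, ranks):
    """Return the elements of S with the given 1-based ranks, in sorted
    order. Partition around a random pivot and recurse only into the
    sides that still contain query ranks, so unqueried regions of S
    are never fully sorted."""
    if not ranks:
        return []
    pivot = random.choice(S)
    lo = [x for x in S if x < pivot]
    hi = [x for x in S if x > pivot]
    k = len(S) - len(hi)  # ranks len(lo)+1 .. k are pivot copies
    left = multiselect(lo, [r for r in ranks if r <= len(lo)])
    mid = [pivot for r in ranks if len(lo) < r <= k]
    right = multiselect(hi, [r - k for r in ranks if r > k])
    return left + mid + right
```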
</body></html>
2024-02-28 01:00:00 UTC | arXiv: Data Structures and Algorithms | FlipHash: A Constant-Time Consistent Range-Hashing Algorithm | http://arxiv.org/abs/2402.17549v1
http://arxiv.org/abs/2402.17549v1
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<p class="arxiv-authors"><b>Authors:</b> <a href="https://dblp.uni-trier.de/search?q=Charles+Masson">Charles Masson</a>, <a href="https://dblp.uni-trier.de/search?q=Homin+K.+Lee">Homin K. Lee</a></p>Consistent range-hashing is a technique used in distributed systems, either
directly or as a subroutine for consistent hashing, commonly to realize an even
and stable data distribution over a variable number of resources. We introduce
FlipHash, a consistent range-hashing algorithm with constant time complexity
and low memory requirements. Like Jump Consistent Hash, FlipHash is intended
for applications where resources can be indexed sequentially. Under this
condition, it ensures that keys are hashed evenly across resources and that
changing the number of resources only causes keys to be remapped from a removed
resource or to an added one, but never shuffled across persisted ones. FlipHash
differentiates itself with its low computational cost, achieving constant-time
complexity. We show that FlipHash beats Jump Consistent Hash's cost, which is
logarithmic in the number of resources, both theoretically and in experiments
over practical settings.</body></html>
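For context on the baseline being beaten, here is a straightforward Python port of Jump Consistent Hash (the Lamping-Veach algorithm the abstract compares against); it exhibits both the remapping-stability property and the source of the logarithmic cost. FlipHash itself is not shown:

```python
def jump_consistent_hash(key: int, num_buckets: int) -> int:
    """Jump Consistent Hash: map a 64-bit key to one of num_buckets
    sequentially indexed buckets. Growing from n to n+1 buckets only
    ever moves keys into the new bucket n, never between old ones.
    The loop performs O(log num_buckets) jumps on average, which is
    the cost FlipHash improves to constant time."""
    b, j = -1, 0
    while j < num_buckets:
        b = j
        key = (key * 2862933555777941757 + 1) % 2**64  # 64-bit LCG step
        j = int((b + 1) * ((1 << 31) / ((key >> 33) + 1)))  # next jump
    return b
```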
2024-02-28 01:00:00 UTC | arXiv: Data Structures and Algorithms