Seminar archive

07.12.21630
Michał Seweryn
Université Libre de Bruxelles
Informatyka Teoretyczna
Recent results in graph product structure theory

Graph product structure theory describes complicated graphs as subgraphs of strong products of simpler building blocks. Usually, the strong product involves three graphs: a graph of bounded treewidth, a small complete graph, and a path. For example, Dujmović et al. showed that every planar graph is a subgraph of the strong product of a treewidth-3 graph, a complete graph on three vertices, and a path. This theorem has been the key to solving many long-standing problems about planar graphs, and is arguably the most important result of graph product structure theory.
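
For reference, the strong product operation used above can be made concrete with a small sketch (the standard definition, not specific to the talk; graphs are represented as adjacency dictionaries, and the names are my own):

def strong_product(A, B):
    # Strong product of two graphs given as dicts: vertex -> set of neighbours.
    # Two distinct pairs (a, b), (c, d) are adjacent iff each coordinate is
    # equal or adjacent in its factor graph.
    V = [(a, b) for a in A for b in B]
    adj = {v: set() for v in V}
    for a, b in V:
        for c, d in V:
            if (a, b) != (c, d) and (a == c or c in A[a]) and (b == d or d in B[b]):
                adj[(a, b)].add((c, d))
    return adj

# The strong product of two single edges is the complete graph K4.
K2 = {0: {1}, 1: {0}}
print(all(len(nbrs) == 3 for nbrs in strong_product(K2, K2).values()))  # True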

In my talk I will discuss some of the recent results in this field. First I will discuss two generalizations of the product structure theorem for planar graphs to k-planar graphs and k-powers of planar graphs with bounded degree. The distinguishing property of these theorems is that the bound on the treewidth in the product is an absolute constant independent of k and the maximum degree. Then, I will discuss some product structure theorems, where an n-vertex graph is a subgraph of the strong product of two graphs: a graph of constant treewidth, and a complete graph on O(n) vertices. These theorems are qualitative strengthenings of √n-separator theorems by Lipton-Tarjan and Alon-Seymour-Thomas.  

Joint works with Marc Distel, Vida Dujmović, David Eppstein, Robert Hickingbotham, Gwenaël Joret, Piotr Micek, Pat Morin, and David Wood

13.08.87340
Ola Svensson
École Polytechnique Fédérale de Lausanne
Informatyka Teoretyczna
The Price of Explainability for Clustering
Given a set of points in d-dimensional space, an explainable clustering is one where the clusters are specified by a tree of axis-aligned threshold cuts. Dasgupta et al. (ICML 2020) posed the question of the price of explainability: the worst-case ratio between the cost of the best explainable clustering and that of the best unrestricted clustering.
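
As a toy illustration of this definition only (not the algorithm from the talk), the sketch below brute-forces the k = 2 case with the ℓ1 metric: it compares the cost of the best unrestricted 2-clustering with the best cost achievable by a single axis-aligned threshold cut; all function names and the sample instance are my own.

from itertools import combinations

def l1(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def median_cost(points):
    # Optimal 1-median cost under the l1 metric: the coordinate-wise median is an optimal center.
    med = tuple(sorted(c)[len(points) // 2] for c in zip(*points))
    return sum(l1(p, med) for p in points)

def best_2_clustering(points):
    # Unrestricted optimum over all 2-partitions (brute force; distinct points, tiny inputs only).
    best = float("inf")
    for r in range(1, len(points)):
        for part in combinations(points, r):
            rest = [p for p in points if p not in part]
            best = min(best, median_cost(list(part)) + median_cost(rest))
    return best

def best_threshold_clustering(points):
    # Explainable optimum for k = 2: a single axis-aligned threshold cut.
    best = float("inf")
    for axis in range(len(points[0])):
        for theta in sorted({p[axis] for p in points})[:-1]:
            left = [p for p in points if p[axis] <= theta]
            right = [p for p in points if p[axis] > theta]
            best = min(best, median_cost(left) + median_cost(right))
    return best

pts = [(0, 0), (1, 10), (10, 1), (11, 11)]
# The ratio of the two printed costs (explainable / unrestricted) is the price
# of explainability of this particular instance.
print(best_2_clustering(pts), best_threshold_clustering(pts))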
 
We show that the price of explainability for k-medians is at most 1+H_{k−1}, where H_{k−1} is the (k−1)-st harmonic number; in fact, we show that the popular Random Thresholds algorithm has exactly this price of explainability, matching the known lower bound constructions. We complement our tight analysis of this particular algorithm by constructing instances where the price of explainability (using any algorithm) is at least (1−o(1))·ln k, showing that our result is best possible, up to lower-order terms. We also improve the price of explainability for the k-means problem to O(k·lnln k) from the previous O(k·ln k), considerably closing the gap to the lower bounds of Ω(k).
 
Finally, we study the algorithmic question of finding the best explainable clustering: We show that explainable k-medians and k-means cannot be approximated better than O(ln k), under standard complexity-theoretic conjectures. This essentially settles the approximability of explainable k-medians and leaves open the intriguing possibility to get significantly better approximation algorithms for k-means than its price of explainability.
 
This is joint work with Anupam Gupta, Madhusudhan Reddy Pittu, and Rachel Yuan.

20.04.35320
Ruta Mehta
University of Illinois at Urbana-Champaign
Informatyka Teoretyczna
Competitive division of goods, bads, and mixed: existence, computation, and complexity

Fair division is the problem of allocating a set of items among agents in a fair and efficient manner. This age-old problem, mentioned even in the Bible, arises naturally in a wide range of real-life settings, for example, school seat assignments, partnership dissolution, sharing of satellites, and dividing costs for climate resilience. Division based on competitive equilibrium (CE) has emerged as one of the best mechanisms for this problem. The existence and computability of CE have been extensively studied when all items are disposable goods, while the problem is less explored when some of them are non-disposable chores (bads). In this talk, I will discuss recent algorithmic advances on the computation of CE when each item may be a good, a chore, or both (mixed).

I will first consider the case of additive valuations, where when all items are goods, the CE set is well-known to be captured by convex programming formulations and thereby forms a convex set. In sharp contrast, with chores, the CE set may be nonconvex and disconnected. I will discuss how to handle this non-convexity through a new exterior-point method to find an approximate CE in polynomial time (FPTAS). This method seems general enough to work with any mathematical formulation that optimizes a coordinate-wise monotone function over linear constraints. Finally, I will discuss recent developments on the exchange setting (barter system) on existence and computational complexity.

Based on joint works with Shant Boodaghians, Bhaskar Ray Chaudhury, Jugal Garg, and Peter McGlaughlin.

29.03.5203
Mikkel Thorup
University of Copenhagen
Informatyka Teoretyczna
Reconstructing the Tree of Life (Fitting Distances by Tree Metrics)

We consider the numerical taxonomy problem of fitting an S×S distance matrix D with a tree metric T. Here T is a weighted tree spanning S where the path lengths in T induce a metric on S. If there is a tree metric matching D exactly, then it is easily found. If there is no exact match, then for some k, we want to minimize the Lk norm of the errors, that is, pick T so as to minimize ||D−T||_k = (Σ_{i,j∈S} |D(i,j)−T(i,j)|^k)^{1/k}.
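
A minimal sketch of the objective (assuming, for simplicity, that the tree T spans exactly S with no extra vertices; the function names and the toy instance are my own):

def tree_metric(n, weighted_edges):
    # All-pairs path lengths in a weighted tree on vertices 0..n-1
    # (Floyd-Warshall; fine for the small instances this sketch targets).
    INF = float("inf")
    T = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in weighted_edges:
        T[u][v] = T[v][u] = w
    for k in range(n):
        for i in range(n):
            for j in range(n):
                T[i][j] = min(T[i][j], T[i][k] + T[k][j])
    return T

def lk_error(D, T, k):
    # ||D - T||_k as defined above, summing over all ordered pairs i, j.
    n = len(D)
    return sum(abs(D[i][j] - T[i][j]) ** k for i in range(n) for j in range(n)) ** (1 / k)

# A star with unit edge weights fits this matrix with L1 error 2
# (the pair {1, 2} is off by 1, counted once in each order).
D = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
T = tree_metric(3, [(0, 1, 1), (0, 2, 1)])
print(lk_error(D, T, 1))  # 2.0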

An evolutionary tree induces a hierarchical classification of species and this is not just tied to biology. Medicine, ecology and linguistics are just some of the fields where this concept appears, and it is even an integral part of machine learning and data science. Fundamentally, if we can approximate distances with a tree, then they are much easier to reason about: many questions that are NP-hard for general metrics can be answered in linear time on tree metrics. In fact, humans have appreciated hierarchical classifications at least since Plato and Aristotle (350 BC).

The numerical taxonomy problem is important in practice and many heuristics have been proposed. In this talk we will review the basic algorithmic theory, results, and techniques for the problem, including the most recent result from FOCS'21 [Vincent Cohen-Addad et al., 2021]. These results paint a varied landscape with big differences between different moments, and with some very nice open problems remaining.

- At STOC'93, Farach, Kannan, and Warnow [Martin Farach et al., 1995] proved that under L∞, we can find the optimal ultrametric. Almost all other variants of the problem are APX-hard.

- At SODA'96, Agarwala, Bafna, Farach, Paterson, and Thorup [Richa Agarwala et al., 1999] showed that for any norm Lk, k≥1, if the best ultrametric can be α-approximated, then the best tree metric can be 3α-approximated. In particular, this implied a 3-approximation for tree metrics under L∞.

- At FOCS'05, Ailon and Charikar [Nir Ailon and Moses Charikar, 2011] showed that for any Lk, k≥1, we can get an approximation factor of O(((log n)(log log n))^{1/k}) for both tree and ultrametrics. Their paper was focused on the L1 norm, and they wrote "Determining whether an O(1) approximation can be obtained is a fascinating question".

- At FOCS'21, Cohen-Addad, Das, Kipouridis, Parotsidis, and Thorup [Vincent Cohen-Addad et al., 2021] showed that indeed a constant factor is possible for L1 for both tree and ultrametrics. This uses the special structure of L1 in relation to hierarchies.

- The status of Lk is wide open for 1<k<∞. All we know is that the approximation factor is between Ω(1) and O((log n)(log log n)).

13.11.51637
Roman Madej
Podstawy Informatyki
Modular Construction of Fixed Point Combinators and Clocked Böhm Trees by Jörg Endrullis, Dimitri Hendriks and Jan Willem Klop

Fixed point combinators (and their generalization: looping combinators) are classic notions belonging to the heart of λ-calculus and logic. We start with an exploration of the structure of fixed point combinators (fpc's), vastly generalizing the well-known fact that if Y is an fpc, Y(SI) is again an fpc, generating the Böhm sequence of fpc's. Using the infinitary λ-calculus we devise infinitely many other generation schemes for fpc's. In this way we find schemes and building blocks to construct new fpc's in a modular way. Having created a plethora of new fixed point combinators, the task is to prove that they are indeed new. That is, we have to prove their β-inconvertibility. Known techniques via Böhm Trees do not apply, because all fpc's have the same Böhm Tree (BT). Therefore, we employ ‘clocked BT's’, with annotations that convey information of the tempo in which the data in the BT are produced. BT's are thus enriched with an intrinsic clock behaviour, leading to a refined discrimination method for λ-terms. The corresponding equality is strictly intermediate between =β and =BT, the equality in the classical models of λ-calculus. An analogous approach pertains to Lévy–Longo and Berarducci trees. Finally, we increase the discrimination power by a precision of the clock notion that we call ‘atomic clock’.
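
For readers less familiar with fpc's, here is a minimal illustration of my own (not a construction from the paper): Python is call-by-value, so the sketch uses the eager variant Z of the Y combinator.

# Z combinator: a call-by-value fixed point combinator.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Z(F) behaves as a fixed point of F; here F describes one step of factorial.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120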


The theory of sage birds (technically called fixed point combinators) is a fascinating and basic part of combinatory logic; we have only scratched the surface.

08.07.32472
Rafał Loska
Podstawy Informatyki
Strict monotonic trees arising from evolutionary processes: combinatorial and probabilistic study by Olivier Bodini, Antoine Genitrini, Cécile Mailler and Mehdi Naima
In this paper we study two models of labelled random trees that generalise the original unlabelled Schröder tree. Our new models can be seen as models for phylogenetic trees in which nodes represent species and labels encode the order of appearance of these species, and thus the chronology of evolution. One important feature of our trees is that they can be generated efficiently thanks to a dynamical, recursive construction. Our first model is an increasing tree in the classical sense (labels increase along each branch of the tree and each label appears only once). To better model phylogenetic trees, we relax the rules of labelling by allowing repetitions in the second model. For each of the two models, we provide asymptotic theorems for different characteristics of the tree (e.g. degree of the root, degree distribution, height, etc.), thus giving extensive information about the typical shapes of these trees. We also provide algorithms to generate large trees efficiently in the two models. The proofs are based on a combination of analytic combinatorics, probabilistic methods, and bijective methods (we exhibit bijections between our models and well-known models of the literature such as permutations and Stirling numbers of both kinds). It turns out that even though our models are labelled, they can be specified simply in the world of ordinary generating functions. However, the resulting generating functions will be formal. Then, by applying Borel transforms the models will be amenable to techniques of analytic combinatorics.

04.03.13307
Sebastian Spyrzewski
Podstawy Informatyki
A characterization of lambda-terms transforming numerals by Paweł Parys
It is well known that simply typed λ-terms can be used to represent numbers, as well as some other data types. We show that λ-terms of each fixed (but possibly very complicated) type can be described by a finite piece of information (a set of appropriately defined intersection types) and by a vector of natural numbers. On the one hand, the description is compositional: having only the finite piece of information for two closed λ-terms M and N, we can determine its counterpart for M N, and a linear transformation that applied to the vectors of numbers for M and N gives us the vector for M N. On the other hand, when a λ-term represents a natural number, then this number is approximated by a number in the vector corresponding to this λ-term. As a consequence, we prove that in a λ-term of a fixed type, we can store only a fixed number of natural numbers, in such a way that they can be extracted using λ-terms. More precisely, while representing k numbers in a closed λ-term of some type, we only require that there are k closed λ-terms M_1, ..., M_k such that M_i takes as argument the λ-term representing the k-tuple, and returns the i-th number in the tuple (we do not require that, using λ-calculus, one can construct the representation of the k-tuple out of the k numbers in the tuple). Moreover, the same result holds when we allow that the numbers can be extracted approximately, up to some error (even when we only want to know whether a set is bounded or not). All the results remain true when we allow the Y combinator (recursion) in our λ-terms, as well as uninterpreted constants.

20.04.76384
Bartłomiej Bosek
Optymalizacja Kombinatoryczna
On a Problem of Steinhaus

Let N be a positive integer. A sequence X=(x_1,x_2,…,x_N) of points in the unit interval [0,1) is piercing if {x_1,x_2,…,x_n}∩[i/n,(i+1)/n)≠∅ holds for every n=1,2,…,N and every i=0,1,…,n−1. In 1958 Steinhaus asked whether piercing sequences can be arbitrarily long. A negative answer was provided by Schinzel, who proved that any such sequence may have at most 74 elements. This was later improved to the best possible value of 17 by Warmus, and independently by Berlekamp and Graham. We study a more general variant of piercing sequences. Let f(n)≥n be an infinite nondecreasing sequence of positive integers. A sequence X=(x_1,x_2,…,x_{f(N)}) is f-piercing if {x_1,x_2,…,x_{f(n)}}∩[i/n,(i+1)/n)≠∅ holds for every n=1,2,…,N and every i=0,1,…,n−1. A special case of f(n)=n+d, with d a fixed nonnegative integer, was studied by Berlekamp and Graham. They noticed that for each d≥0, the maximum length of any (n+d)-piercing sequence is finite. Expressing this maximum length as s(d)+d, they obtained an exponential upper bound on the function s(d), which was later improved to s(d)=O(d^3) by Graham and Levy. Recently, Konyagin proved that 2d⩽s(d)<200d holds for all sufficiently big d. Using a different technique based on the Farey fractions and stick-breaking games, we prove here that the function s(d) satisfies ⌊c_1·d⌋⩽s(d)⩽c_2·d+o(d), where c_1=ln2/(1−ln2)≈2.25 and c_2=(1+ln2)/(1−ln2)≈5.52. We also prove that there exists an infinite f-piercing sequence with f(n)=γ·n+o(n) if and only if γ≥1/ln2≈1.44. This is joint work with Marcin Anholcer, Jarosław Grytczuk, Grzegorz Gutowski, Jakub Przybyło, Rafał Pyzik, and Mariusz Zając.
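
The definition can be checked directly; a small sketch of my own (the default second argument corresponds to ordinary piercing, f(n)=n, and any nondecreasing f with f(n)≥n gives the f-piercing variant):

def is_piercing(xs, f=lambda n: n):
    # xs is a sequence of points in [0, 1); checks the (f-)piercing condition
    # for every n = 1, ..., N, where N is the largest n with f(n) <= len(xs).
    N = max((n for n in range(1, len(xs) + 1) if f(n) <= len(xs)), default=0)
    for n in range(1, N + 1):
        prefix = xs[:f(n)]
        for i in range(n):
            if not any(i / n <= x < (i + 1) / n for x in prefix):
                return False
    return True

print(is_piercing([0.5, 0.1, 0.7]))  # True
print(is_piercing([0.5, 0.6, 0.1]))  # False: the first two points both lie in [1/2, 1), so [0, 1/2) is missed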

  1. Marcin Anholcer, Bartłomiej Bosek, Jarosław Grytczuk, Grzegorz Gutowski, Jakub Przybyło, Rafał Pyzik, Mariusz Zając. On a Problem of Steinhaus. arXiv:2111.01887. (2021).

08.08.38053
Bartłomiej Bosek
Optymalizacja Kombinatoryczna
Recoloring Unit Interval Graphs with Logarithmic Recourse Budget

We study the problem of coloring a unit interval graph that changes dynamically. In our model the unit intervals are added or removed one at a time and have to be colored immediately so that no two overlapping intervals share the same color. After each update, only a limited number of intervals are allowed to be recolored. The limit on the number of recolorings per update is called the recourse budget. In this paper, we show that if the graph remains k-colorable at all times, and the updates consist of insertions only, then we can achieve an amortized recourse budget of O(k^7 log n) while maintaining a proper coloring with k colors. This is an exponential improvement over the result in [Bosek et al., Recoloring Interval Graphs with Limited Recourse Budget. SWAT 2020] in terms of both k and n. We complement this result by showing a lower bound of Ω(n) on the amortized recourse budget in the fully dynamic setting. Our incremental algorithm can be efficiently implemented. As a byproduct of independent interest, we include a new result on coloring proper circular-arc graphs. Let L be the maximum number of arcs intersecting in one point for some set of unit circular arcs A. We show that if there is a set A′ of non-intersecting unit arcs of size L^2−1 such that A∪A′ does not contain L+1 arcs intersecting in one point, then it is possible to color A with L colors. This complements the work on unit circular arc coloring, which specifies sufficient conditions needed to color A with L+1 colors or more. This is joint work with Anna Zych-Pawlewicz.

  1. Bartłomiej Bosek, Anna Zych-Pawlewicz. Recoloring Unit Interval Graphs with Logarithmic Recourse Budget. arXiv:2202.08006. (2022).

05.03.70912
Szymon Toruńczyk
University of Warsaw
Informatyka Teoretyczna
Ordered graphs of bounded twin-width and monadically NIP graph classes

We establish a list of characterizations of bounded twin-width for hereditary classes of totally ordered graphs: as classes of at most exponential growth studied in enumerative combinatorics, as monadically NIP classes studied in model theory, as classes that do not transduce the class of all graphs studied in finite model theory, and as classes for which model checking first-order logic is fixed-parameter tractable studied in algorithmic graph theory.

This has several consequences. First, it allows us to show that every hereditary class of ordered graphs either has at most exponential growth, or has at least factorial growth. This settles a question first asked by Balogh, Bollobás, and Morris [Eur. J. Comb. '06] on the growth of hereditary classes of ordered graphs, generalizing the Stanley-Wilf conjecture/Marcus-Tardos theorem. Second, it gives a fixed-parameter approximation algorithm for twin-width on ordered graphs. Third, it yields a full classification of fixed-parameter tractable first-order model checking on hereditary classes of ordered binary structures. Fourth, it provides a model-theoretic characterization of classes with bounded twin-width.

These results are joint work with Bonnet, Giocanti, Ossona de Mendez, Simon, and Thomassé, accepted to STOC'22.

Time permitting, I will also discuss the more general landscape of monadically NIP graph classes, generalizing both nowhere dense classes and classes of bounded twin-width.

22.12.54429
Andrii Orap, Maksym Zub

Grundy Distinguishes Treewidth from Pathwidth

Structural graph parameters, such as treewidth, pathwidth, and clique-width, are a central topic of parameterized complexity research. The main goal of research in this area is to understand the "price of generality" of these widths: as we move from more restrictive notions to more general ones, which are the problems whose complexity degrades from fixed-parameter tractable to intractable? This type of question is by now very well studied, but, quite strikingly, the algorithmic frontier between the two (arguably) most central width notions, treewidth and pathwidth, is still poorly understood: no natural graph problem is currently known to be W-hard for one but FPT for the other. Indeed, a surprising development of the last few years has been the observation that, for many of the most paradigmatic problems, their complexities for the two parameters actually coincide exactly, despite the fact that treewidth is a much more general parameter. It thus seems that the extra generality of treewidth over pathwidth often comes "for free".

Our main contribution in this paper is the discovery of the first natural example where this generality comes at a high price. We consider Grundy Coloring, a variation of coloring in which we try to compute the worst possible coloring that can be assigned to a graph by the greedy First-Fit algorithm. We show that this well-studied problem is FPT parameterized by pathwidth; however, it becomes significantly harder (W[1]-hard) when parameterized by treewidth. Moreover, we show that Grundy Coloring makes a second complexity jump for more general widths, becoming para-NP-hard for clique-width. Hence, Grundy Coloring nicely captures the complexity trade-offs between the three most well-studied parameters. Completing the picture, we show that Grundy Coloring is FPT parameterized by modular-width.
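
The quantity in question is easy to state in code: the Grundy number is the worst value First-Fit can reach over all vertex orderings. A brute-force sketch of my own, illustrating the definition only (feasible for tiny graphs; it is not one of the paper's algorithms):

from itertools import permutations

def first_fit(order, adj):
    # Greedy First-Fit: each vertex gets the smallest color unused on its
    # already-colored neighbours.
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 1
        while c in used:
            c += 1
        color[v] = c
    return max(color.values())

def grundy_number(adj):
    # Worst case of First-Fit over all vertex orders.
    return max(first_fit(order, adj) for order in permutations(adj))

# A path on 4 vertices is 2-colorable, yet its Grundy number is 3
# (the order 0, 3, 2, 1 forces color 3 on vertex 1).
P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(grundy_number(P4))  # 3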

23.04.51637
Jan Koscisz
Podstawy Informatyki
Functions-as-constructors higher-order unification: extended pattern unification by Tomer Libal and Dale Miller
Unification is a central operation in constructing a range of computational logic systems based on first-order and higher-order logics. First-order unification has several properties that guide its incorporation in such systems. In particular, first-order unification is decidable, unary, and can be performed on untyped term structures. None of these three properties hold for full higher-order unification: unification is undecidable, unifiers can be incomparable, and term-level typing can dominate the search for unifiers. The so-called pattern subset of higher-order unification was designed to be a small extension to first-order unification that respects the laws governing λ-binding (i.e., the equalities for α, β, and η-conversion) but which also satisfied those three properties. While the pattern fragment of higher-order unification has been used in numerous implemented systems and in various theoretical settings, it is too weak for many applications. This paper defines an extension of pattern unification that should make it more generally applicable, especially in proof assistants that allow for higher-order functions. This extension’s main idea is that the arguments to a higher-order, free variable can be more than just distinct bound variables. In particular, such arguments can be terms constructed from (sufficient numbers of) such bound variables using term constructors and where no argument is a subterm of any other argument. We show that this extension to pattern unification satisfies the three properties mentioned above.
28.10.32471
Roch Wójtowicz
Podstawy Informatyki
Sharp thresholds of graph properties, and the k-SAT problem by Ehud Friedgut, with an appendix by Jean Bourgain

Consider G(n, p) to be the probability space of random graphs on n vertices with edge probability p. We will be considering subsets of this space defined by monotone graph properties. A monotone graph property P is a property of graphs such that


a) P is invariant under graph automorphisms.
b) If graph H has property P , then so does any graph G having H as a subgraph.
 

A monotone symmetric family of graphs is a family defined by such a property. One of the first observations made about random graphs by Erdős and Rényi in their seminal work on random graph theory [12] was the existence of threshold phenomena, the fact that for many interesting properties P, the probability of P appearing in G(n, p) exhibits a sharp increase at a certain critical value of the parameter p. Bollobás and Thomason proved the existence of threshold functions for all monotone set properties ([6]), and in [14] it is shown that this behavior is quite general, and that all monotone graph properties exhibit threshold behavior, i.e. the probability of their appearance increases from values very close to 0 to values close to 1 in a very small interval. More precise analysis of the size of the threshold interval is done in [7]. This threshold behavior which occurs in various settings which arise in combinatorics and computer science is an instance of the phenomenon of phase transitions which is the subject of much interest in statistical physics. One of the main questions that arises in studying phase transitions is: how “sharp” is the transition? For example, one of the motivations for this paper arose from the question of the sharpness of the phase transition for the property of satisfiability of a random k-CNF Boolean formula. Nati Linial, who introduced me to this problem, suggested that although much concrete analysis was being performed on this problem the best approach would be to find general conditions for sharpness of the phase transition, answering the question posed in [14] as to the relation between the length of the threshold interval and the value of the critical probability.
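
As a quick, illustrative simulation of threshold behavior for one concrete monotone property, connectivity of G(n, p) (a sketch of my own, not taken from the paper), the empirical probability jumps from near 0 to near 1 around p ≈ ln n / n:

import random

def gnp_is_connected(n, p):
    # Sample G(n, p) and test connectivity with a union-find structure.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    components = n
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < p:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    components -= 1
    return components == 1

n, trials = 200, 100
for p in (0.01, 0.02, 0.03, 0.04, 0.05):
    print(p, sum(gnp_is_connected(n, p) for _ in range(trials)) / trials)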

23.05.76278
Roch Wójtowicz
Podstawy Informatyki
SEMINAR AND ROCH WÓJTOWICZ'S TALK MOVED TO 11.05.2022


18.07.84601
Raphael Steiner
ETH Zürich
Informatyka Teoretyczna
New bounds for relatives of Hadwiger's conjecture

In this talk, I will present some recent results on two variants of Hadwiger's conjecture.

First, I will discuss the so-called Odd Hadwiger's conjecture, a strengthening of Hadwiger's conjecture proposed by Gerards and Seymour in 1995. I will show how, using a simple new trick, one can reduce the problem of coloring graphs with no odd Kt-minor to coloring graphs with no Kt-minor, up to a factor of 2, thereby improving previous upper bounds for this problem. Then, I will outline how a similar idea can be used to significantly improve the known bounds for clustered colorings of odd Kt-minor-free graphs, in which we look for (possibly improper) colorings with monochromatic components of small size.

Second, I will discuss the so-called List Hadwiger's conjecture, which states that there exists a constant c such that every graph with no Kt-minor is ct-choosable (i.e., list colorable). I will show a probabilistic construction of a new lower bound of 2t−o(t) for list coloring Kt-minor-free graphs; this refutes a conjecture by Kawarabayashi and Mohar which stated that one can take c=3/2. I will then show how some well-chosen modifications to our construction can be used to prove lower bounds also for list coloring H-minor-free graphs where H is non-complete. In particular, I will show that Ks,t-minor-free graphs for large comparable s and t can have list chromatic number at least (1−o(1))(s+t+min(s,t)); this refutes a 2001 conjecture by Woodall.

12.03.65436
Mathieu Mari
University of Warsaw and IDEAS-NCBR
Informatyka Teoretyczna
A (2+ε)-Approximation Algorithm for Maximum Independent Set of Rectangles

We study the Maximum Independent Set of Rectangles (MISR) problem, where we are given a set of axis-parallel rectangles in the plane and the goal is to select a subset of non-overlapping rectangles of maximum cardinality. In a recent breakthrough, Mitchell [2021] obtained the first constant-factor approximation algorithm for MISR. His algorithm achieves an approximation ratio of 10 and it is based on a dynamic program that intuitively recursively partitions the input plane into special polygons called corner-clipped rectangles (CCRs), without intersecting certain special horizontal line segments called fences.

In this talk, I will present a (2+ϵ)-approximation algorithm for MISR which is also based on a recursive partitioning scheme. First, we use a partition into a class of axis-parallel polygons with constant complexity each that are more general than CCRs. This allows us to provide an arguably simpler analysis and at the same time already improves the approximation ratio to 4. Then, using a more elaborate charging scheme and a recursive partitioning into general axis-parallel polygons with constant complexity, we improve our approximation ratio to 2+ϵ. In particular, we construct a recursive partitioning based on more general fences which can be sequences of up to O(1/ϵ) line segments each. This partitioning routine and our other new ideas may be useful for future work towards a PTAS for MISR.
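
To make the objective concrete, here is a brute-force sketch of my own for the MISR problem statement itself (unrelated to the recursive partitioning schemes or the (2+ε) algorithm above):

from itertools import combinations

def disjoint(r, s):
    # Open axis-parallel rectangles (x1, y1, x2, y2) do not overlap.
    return r[2] <= s[0] or s[2] <= r[0] or r[3] <= s[1] or s[3] <= r[1]

def max_independent_rectangles(rects):
    # Brute force over subsets, largest first; only meant to illustrate the objective.
    for k in range(len(rects), 0, -1):
        for sub in combinations(rects, k):
            if all(disjoint(a, b) for a, b in combinations(sub, 2)):
                return k
    return 0

rects = [(0, 0, 2, 2), (1, 1, 3, 3), (2, 0, 4, 2), (0, 2, 2, 4)]
print(max_independent_rectangles(rects))  # 3: all but the middle rectangle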

At the end of the talk, I will present a bunch of open questions related to the problem.

This is joint work with Waldo Gálvez, Arindam Khan, Tobias Mömke, Madhusudhan Reddy and Andreas Wiese.

29.12.48953
Ignacy Buczek, Michał Woźny

Sorting Balls and Water: Equivalence and Computational Complexity

Sorting problems have long been the subject of various kinds of research. Recently, two mobile games with a sorting theme have gained popularity. In these games, the player is given bins filled with colored objects (balls in one game, water in the other) and several empty bins, and the goal is to sort the objects by color. In a single move, the player may transfer objects from one bin to another, provided the color of the transferred objects matches the color of the topmost object of the target bin, or the target bin is empty.

In the paper, the authors study the computational complexity of these puzzles. They first show that the two games are equivalent with respect to solvability. More precisely, whether an initial configuration of the game is solvable does not depend on whether the objects behave like balls or like water. They also prove that every yes-instance has a solution of polynomial size, which shows that the solvability problem for these puzzles is in NP. They then argue that this problem is NP-complete. They also find polynomial-time algorithms for special cases. Finally, they ask how many empty bins are needed for an instance to be solvable regardless of the initial arrangement of the objects, and they show non-trivial lower and upper bounds depending on the number of initially filled bins and their capacities.
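
A minimal solvability check mirroring the move rule described above, written as a sketch of my own: balls are moved one at a time, and a state counts as solved once every bin is monochromatic (the actual games additionally require bins to be full, a detail this toy version ignores).

from collections import deque

def solved(state):
    return all(len(set(bin_)) <= 1 for bin_ in state)

def moves(state, capacity):
    # Move the top ball of bin i onto bin j if j is not full and is empty
    # or its top ball has a matching color.
    for i, src in enumerate(state):
        if not src:
            continue
        ball = src[-1]
        for j, dst in enumerate(state):
            if i != j and len(dst) < capacity and (not dst or dst[-1] == ball):
                new = [list(b) for b in state]
                new[i].pop()
                new[j].append(ball)
                yield tuple(tuple(b) for b in new)

def solvable(start, capacity):
    start = tuple(tuple(b) for b in start)
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if solved(state):
            return True
        for nxt in moves(state, capacity):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Two mixed bins of capacity 2 can be sorted only with an extra empty bin.
print(solvable([["r", "g"], ["g", "r"], []], capacity=2))  # True
print(solvable([["r", "g"], ["g", "r"]], capacity=2))      # False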

24.02.7940
Jarosław Byrka
University of Wrocław
Informatyka Teoretyczna
Online Facility Location with Linear Delay

We study the problem of online facility location with delay. In this problem, a sequence of n clients appear in the metric space, and they need to be eventually connected to some open facility. The clients do not have to be connected immediately, but such a choice comes with a penalty: each client incurs a waiting cost (the difference between its arrival and connection time). At any point in time, an algorithm may decide to open a facility and connect any subset of clients to it. This is a well-studied problem both of its own, and within the general class of network design problems with delays.
Our main focus is on a new variant of this problem, where clients may be connected also to an already open facility, but such action incurs an extra cost: an algorithm pays for waiting of the facility (a cost incurred separately for each such "late" connection). This is reminiscent of online matching with delays, where both sides of the connection incur a waiting cost. We call this variant two-sided delay to differentiate it from the previously studied one-sided delay.
We present an O(1)-competitive deterministic algorithm for the two-sided delay variant. On the technical side, we study a greedy strategy, which grows budgets with increasing waiting delays and opens facilities for subsets of clients once sums of these budgets reach certain thresholds. Our technique is a substantial extension of the approach used by Jain, Mahdian and Saberi [STOC 2002] for analyzing the performance of offline algorithms for facility location.
We then show how to transform our O(1)-competitive algorithm for the two-sided delay variant into an O(log n / log log n)-competitive deterministic algorithm for one-sided delays. We note that all previous online algorithms for problems with delays in general metrics have at least logarithmic ratios.

Joint work with Marcin Bienkowski, Martin Böhm and Jan Marcinkowski

09.10.68173
Jan Derbisz
Informatyka Teoretyczna
Circular-arc graphs and the Helly property

Circular-arc graphs, defined as the intersection graphs of arcs of a fixed circle, are probably one of the simplest classes of intersection graphs which does not satisfy the Helly property in general (i.e. there are circular-arc graphs in which some cliques can be represented by arcs whose common intersection is empty). In particular, some cliques of a circular-arc graph G may satisfy the Helly property in some but not all circular-arc representations of G.
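
For illustration only (my own toy example, not from the talk), three arcs can pairwise intersect while having no common point, which is exactly the failure of the Helly property:

def arc(start, end, m=360):
    # Integer sample points of the circular arc from `start` to `end`,
    # wrapping through 0 when start > end.
    return set(range(start, end + 1)) if start <= end else set(range(start, m)) | set(range(0, end + 1))

A, B, C = arc(0, 200), arc(150, 350), arc(300, 60)
print(bool(A & B), bool(B & C), bool(C & A), A & B & C == set())
# True True True True: pairwise intersecting arcs with empty common intersection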

In the Helly Clique Problem, for a given circular-arc graph G and some of its cliques C1,...,Ck (not necessarily maximal in G) one needs to decide whether there exists a circular-arc representation of G in which all the cliques C1,...,Ck satisfy the Helly property. We know that the Helly Clique Problem is NP-complete and, under the ETH, it cannot be solved in time 2^{o(k)}·poly(n), where n is the number of vertices of G (Ağaoğlu et al.).

In the talk we will consider the Helly Clique Problem in the context of parameterized complexity, where the natural parameter is the number of cliques given in the input. We will show that the Helly Clique Problem:
- admits an algorithm with running time 2^{O(k log k)}·poly(n) (and hence it belongs to FPT),
- admits a polynomial kernel of size O(k^6).

Moreover, we will discuss the relation of the Helly Clique Problem with the recognition problems of so-called H-graphs; in particular, in the context of those graphs H for which the recognition problem remains open.

This is joint work with T. Krawczyk. The talk also includes joint work with D. Ağaoğlu, O. Cagrici, T. Hartmann, P. Hliněný, J. Kratochvíl, T. Krawczyk, and P. Zeman.

28.07.51691
Aleksander Katan, Roch Wójtowicz

Filling crosswords is very hard

The authors analyse the problem of filling crosswords, which was already considered, for example, by Garey and Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness". In this problem we are given m words and n horizontal or vertical slots, and we are asked to fill the slots so that they agree on their intersections. The authors try to pinpoint the source of the puzzle's hardness by looking at the structure of the slot intersection graph, focusing on the case where this graph resembles a tree. Unfortunately, if we assume that words cannot be used more than once, the problem remains NP-hard even under very severe restrictions, for example when the intersection graph is a union of stars and the alphabet has size 2, or when it is a matching (the crossword consists of disjoint crosses) and the alphabet has size 3. The question becomes easier when words may be repeated, as the authors construct an algorithm running in time m^tw, where tw denotes the treewidth of the intersection graph. Even then, however, the algorithm cannot be improved to an FPT one. Moreover, under the ETH, the problem cannot be solved in time m^{o(k)}, where k is the number of horizontal slots of the instance.
Motivated by these negative results, the authors consider a more restricted case, parameterized by the total number of slots rather than only the horizontal ones. The problem becomes FPT (as long as the alphabet has constant size), but the complexity is exponential in n^2. They show that, under the ETH, there is no algorithm running in time 2^{o(n^2)}, even for a two-letter alphabet. At the end of the paper an optimization version is considered, in which we try to fill as many cells as possible. A 1/2-approximation is then easy to obtain, even in the weighted case: it suffices to consider only the vertical or only the horizontal slots. This algorithm is probably optimal, since the existence of a better polynomial-time approximation would contradict the Unique Games Conjecture. The last two results hold regardless of whether we consider the variant with or without word repetitions.

22.09.10677
Torsten Mütze
University of Warwick & Charles University
Informatyka Teoretyczna
Efficient generation of elimination trees and Hamilton paths on graph associahedra
An elimination tree for a connected graph G is a rooted tree on the vertices of G obtained by choosing a root x and recursing on the connected components of G−x to produce the subtrees of x. Elimination trees appear in many guises in computer science and discrete mathematics, and they are closely related to centered colorings and tree-depth. They also encode many interesting combinatorial objects, such as bitstrings, permutations and binary trees. We apply the recent Hartung-Hoang-Mütze-Williams combinatorial generation framework to elimination trees, and prove that all elimination trees for a chordal graph G can be generated by tree rotations using a simple greedy algorithm (see www.combos.org/elim). This yields a short proof for the existence of Hamilton paths on graph associahedra of chordal graphs. Graph associahedra are a general class of high-dimensional polytopes introduced by Carr, Devadoss, and Postnikov, whose vertices correspond to elimination trees and whose edges correspond to tree rotations. As special cases of our results, we recover several classical Gray codes for bitstrings, permutations and binary trees, and we obtain a new Gray code for partial permutations. Our algorithm for generating all elimination trees for a chordal graph G can be implemented in time O(m+n) per generated elimination tree, where m and n are the number of edges and vertices of G, respectively. If G is a tree, we improve this to a loopless algorithm running in time O(1) per generated elimination tree. We also prove that our algorithm produces a Hamilton cycle on the graph associahedron of G, rather than just Hamilton path, if the graph G is chordal and 2-connected. Moreover, our algorithm characterizes chordality, i.e., it computes a Hamilton path on the graph associahedron of G if and only if G is chordal.
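
The recursive definition above translates directly into a short sketch of my own (it builds a single elimination tree using a fixed root-choice rule, the minimum vertex; generating all elimination trees, as in the talk, is not attempted here):

def elimination_tree(vertices, edges, choose=min):
    # Build one elimination tree: pick a root, split the remaining vertices
    # into connected components, and recurse on each component.
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def components(vs):
        vs, comps = set(vs), []
        while vs:
            start = vs.pop()
            comp, stack = {start}, [start]
            while stack:
                u = stack.pop()
                for w in adj[u]:
                    if w in vs:
                        vs.discard(w)
                        comp.add(w)
                        stack.append(w)
            comps.append(comp)
        return comps

    def build(vs):
        root = choose(vs)
        return (root, [build(c) for c in components(vs - {root})])

    return build(set(vertices))

# Elimination tree of the path 1-2-3-4 with the minimum vertex chosen as root:
print(elimination_tree([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]))
# (1, [(2, [(3, [(4, [])])])])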
 
This is joint work with Jean Cardinal (ULB) and Arturo Merino (TU Berlin).

16.09.38056
Daniel Kráľ
Masaryk University in Brno
Informatyka Teoretyczna
Uniform Turán density of hypergraphs

In the early 1980s, Erdős and Sós initiated the study of the classical Turán problem with a uniformity condition: the uniform Turán density of a hypergraph H is the infimum over all d for which any sufficiently large hypergraph with the property that all its linear-size subhypergraphs have density at least d contains H. In particular, they raised the questions of determining the uniform Turán densities of K_4^{(3)}, the complete 4-vertex 3-uniform hypergraph, and K_4^{(3)−}, the hypergraph K_4^{(3)} with an edge removed. The latter question was solved only recently in [Israel J. Math. 211 (2016), 349–366] and [J. Eur. Math. Soc. 97 (2018), 77–97], while the former has remained open for almost 40 years.

Prior to our work, the hypergraph K_4^{(3)−} was the only 3-uniform hypergraph with non-zero uniform Turán density determined exactly. During the talk, we will present the following two results:

  • We construct a family of 3-uniform hypergraphs with uniform Turán density equal to 1/27. This answers a question of Reiher, Rödl and Schacht [J. London Math. Soc. 97 (2018), 77–97] on the existence of such hypergraphs, stemming from their result that the uniform Turán density of every 3-uniform hypergraph is either 0 or at least 1/27.
  • We show that the uniform Turán density of the tight 3-uniform cycle of length k≥5 is equal to 4/27 if k is not divisible by three, and equal to zero otherwise. The case k=5 resolves a problem suggested by Reiher [European J. Combin. 88 (2020), 103117].

The talk is based on results obtained jointly with (subsets of) Matija Bucić, Jacob W. Cooper, Frederik Garbe, Ander Lamaison, Samuel Mohr and David Munhá Correia.

Previous talks

13.01.16153
Louis Esperet
Université Grenoble Alpes
Informatyka Teoretyczna
Universal graphs and labelling schemes

Given a graph class C, a graph G is universal for C if it "contains" all the graphs from C. As there are several notions of containment, there are several notions of universal graphs. In this talk I'll mention two versions:

  • induced-universal graphs: here G contains all the graphs of C as induced subgraphs
  • isometric-universal graphs: here G contains isometric copies of all the graphs from C (isometric means that the distances are preserved in the embedding)

Note that an isometric copy is an induced copy, so the second notion is stronger. These notions are closely related to the notion of labelling schemes in graphs. The goal is to assign labels to the vertices of each graph G from C such that upon reading the labels of any two vertices u and v, we know some properties of u and v in G (whether they are adjacent, or their distance, or whether u is reachable from v if G is a digraph). It turns out that minimizing the size of the labels is equivalent to constructing small universal graphs, at least in the case of induced-universal graphs. For isometric-universal graphs some additional work needs to be done.
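
As a toy instance of an adjacency labelling scheme (a standard textbook example for forests with O(log n)-bit labels; it is my own illustration, not one of the constructions from the talk):

def tree_labels(parent):
    # Label each vertex of a rooted forest by the pair (itself, its parent).
    return {v: (v, parent[v]) for v in parent}

def adjacent(label_u, label_v):
    # Two vertices of the forest are adjacent iff one is the parent of the other.
    u, pu = label_u
    v, pv = label_v
    return pu == v or pv == u

parent = {1: None, 2: 1, 3: 1, 4: 2}
labels = tree_labels(parent)
print(adjacent(labels[4], labels[2]), adjacent(labels[3], labels[4]))  # True False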

I'll survey some recent progress in this area. In particular I'll show how to construct induced-universal graphs of almost optimal size for any hereditary class, using the regularity lemma. In particular this implies almost optimal reachability labelling schemes in digraphs and comparability labelling schemes in posets, and the construction of an almost optimal universal poset for the class of all n-element posets (of size 2^{n/4+o(n)}). I will also show how to construct isometric-universal graphs of size 3^{n+o(n)} for the class of all n-vertex graphs, answering a question of Peter Winkler.

Based on joint work with Marthe Bonamy, Cyril Gavoille, Carla Groenland, and Alex Scott.

04.01.70911
Bartłomiej Bosek
Optymalizacja Kombinatoryczna
Local Dimension of Planar Posets

In 1981, Kelly showed that planar posets can have arbitrarily large dimension. However, the posets in Kelly's example have bounded Boolean dimension and bounded local dimension, leading naturally to the question of whether either Boolean dimension or local dimension is bounded for the class of planar posets. The question for Boolean dimension was first posed by Nešetřil and Pudlák in 1989 and remains unanswered today. The concept of local dimension is quite new, introduced in 2016 by Ueckerdt. In just the last year, researchers have obtained many interesting results concerning Boolean dimension and local dimension, contrasting these parameters with the classic Dushnik-Miller concept of dimension, and establishing links between both parameters and structural graph theory, path-width and tree-width in particular. Here we show that local dimension is not bounded on the class of planar posets. Our proof also shows that the local dimension of a poset is not bounded in terms of the maximum local dimension of its blocks, and it provides an alternative proof of the fact that the local dimension of a poset cannot be bounded in terms of the tree-width of its cover graph, independent of its height. This is joint work with Jarosław Grytczuk and W.T. Trotter.

Bartłomiej Bosek, Jarosław Grytczuk, William T. Trotter. Local Dimension is Unbounded for Planar Posets. arXiv, pages 1-12, 2017.

(the seminar will only be online)

28.05.29842
Bartosz Walczak
Informatyka Teoretyczna
Approximating Pathwidth for Graphs of Small Treewidth

We describe a polynomial-time algorithm which, given a graph G with treewidth t, approximates the pathwidth of G to within a ratio of O(t log t). This is the first algorithm to achieve an f(t)-approximation for some function f.

Our approach builds on the following key insight: every graph with large pathwidth has large treewidth or contains a subdivision of a large complete binary tree. Specifically, we show that every graph with pathwidth at least t·h+2 has treewidth at least t or contains a subdivision of a complete binary tree of height h+1. The bound t·h+2 is best possible up to a multiplicative constant. This result was motivated by, and implies (with c=2), the following conjecture of Kawarabayashi and Rossman (SODA'18): there exists a universal constant c such that every graph with pathwidth Ω(k^c) has treewidth at least k or contains a subdivision of a complete binary tree of height k.

Our main technical algorithm takes a graph G and some (not necessarily optimal) tree decomposition of G of width t' in the input, and it computes in polynomial time an integer h, a certificate that G has pathwidth at least h, and a path decomposition of G of width at most (t'+1)h+1. The certificate is closely related to (and implies) the existence of a subdivision of a complete binary tree of height h. The approximation algorithm for pathwidth is then obtained by combining this algorithm with the approximation algorithm of Feige, Hajiaghayi, and Lee (STOC'05) for treewidth.

 

Joint work with Carla Groenland, Gwenaël Joret, and Wojciech Nadara.

18.12.13414
Bartłomiej Bosek
Optymalizacja Kombinatoryczna
Majority choosability of digraphs

One of the variations of graph coloring is to assign colors to vertices such that for each vertex v, at most half of the neighbors of v have the same color as v. Such a coloring is called a majority coloring of the graph. The goal is to find a majority coloring using as few colors as possible. During the presentation, various variants of this problem, historical results, the latest results, as well as still intriguing hypotheses will be discussed. Among other things, results of joint work with Marcin Anholcer, Jarosław Grytczuk, and Gabriel Jakóbczak will be presented.
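
A small checker for the definition above (undirected version, as stated; for digraphs one would count out-neighbours instead; a sketch of my own):

def is_majority_coloring(adj, color):
    # For every vertex, at most half of its neighbours share its color.
    for v, neighbours in adj.items():
        same = sum(1 for u in neighbours if color[u] == color[v])
        if 2 * same > len(neighbours):
            return False
    return True

# A triangle admits a majority coloring with only 2 colors.
triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(is_majority_coloring(triangle, {1: "a", 2: "a", 3: "b"}))  # True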

  1. László Lovász, On decomposition of graphs, Studia Scientiarum Mathematicarum Hungarica. A Quarterly of the Hungarian Academy of Sciences, 1, 237–238, 1966.
  2. Paul D. Seymour, On the two-colouring of hypergraphs, The Quarterly Journal of Mathematics. Oxford. Second Series, 25, 303–312, 1974.
  3. Dominic van der Zypen, Majority coloring for directed graphs, MathOverflow, 2016.
  4. Stephan Kreutzer, Sang-il Oum, Paul D. Seymour, Dominic van der Zypen, and David R. Wood, Majority colourings of digraphs, Electronic Journal of Combinatorics, 24(2):Paper 2.25, 9, 2017.
  5. Marcin Anholcer, Bartłomiej Bosek, Jarosław Grytczuk, Majority Choosability of Digraphs, Electronic Journal of Combinatorics, 24(3), Paper 3.57, 2017.
  6. António Girão, Teeradej Kittipassorn, Kamil Popielarz, Generalized majority colourings of digraphs, Combinatorics, Probability and Computing, 26(6), 850–855, 2017.
  7. Fiachra Knox and Robert Šámal, Linear bound for majority colourings of digraphs, Electronic Journal of Combinatorics, 25(3):Paper 3.29, 4, 2018.
  8. Bartłomiej Bosek, Jarosław Grytczuk, Gabriel Jakóbczak, Majority Coloring Game, Discrete Applied Mathematics, 255, 15–20, 2019.

(the seminar will only be online)

23.03.81862
Bartłomiej Bosek
Optymalizacja Kombinatoryczna
Majority choosability of digraphs

One of the variations of graph coloring is to assign colors to vertices such that for each vertex v, at most half of the neighbors of v have the same color as v. Such a coloring is called a majority coloring of the graph. The goal is to find a majority coloring using as few colors as possible. During the presentation, various variants of this problem, historical results, the latest results, as well as still intriguing hypotheses will be discussed. Among other things, results of joint work with Marcin Anholcer, Jarosław Grytczuk, and Gabriel Jakóbczak will be presented.

  1. László Lovász, On decomposition of graphs, Studia Scientiarum Mathematicarum Hungarica. A Quarterly of the Hungarian Academy of Sciences, 1, 237–238, 1966.
  2. Paul D. Seymour, On the two-colouring of hypergraphs, The Quarterly Journal of Mathematics. Oxford. Second Series, 25, 303–312, 1974.
  3. Dominic van der Zypen, Majority coloring for directed graphs, MathOverflow, 2016.
  4. Stephan Kreutzer, Sang-il Oum, Paul D. Seymour, Dominic van der Zypen, and David R. Wood, Majority colourings of digraphs, Electronic Journal of Combinatorics, 24(2):Paper 2.25, 9, 2017.
  5. Marcin Anholcer, Bartłomiej Bosek, Jarosław Grytczuk, Majority Choosability of Digraphs, Electronic Journal of Combinatorics, 24(3), Paper 3.57, 2017.
  6. António Girão, Teeradej Kittipassorn, Kamil Popielarz, Generalized majority colourings of digraphs, Combinatorics, Probability and Computing, 26(6), 850–855, 2017.
  7. Fiachra Knox and Robert Šámal, Linear bound for majority colourings of digraphs, Electronic Journal of Combinatorics, 25(3):Paper 3.29, 4, 2018.
  8. Bartłomiej Bosek, Jarosław Grytczuk, Gabriel Jakóbczak, Majority Coloring Game, Discrete Applied Mathematics, 255, 15–20, 2019.

(the seminar will only be online)

05.12.29864
Bartosz Wodziński
Optymalizacja Kombinatoryczna
On the unique games conjecture

For many hard problems, instead of solving them exactly, we need good approximation algorithms. Apart from their good time complexity and a decent guarantee on the approximation factor, we would like to know whether they achieve the best approximation ratio possible (assuming P ≠ NP). Unfortunately, for many NP-complete problems, there is a huge gap between the best known approximation ratio and the ratio that is proved to be unachievable in polynomial time. For instance, for the Vertex Cover problem, we don't know any algorithm having a better ratio than 2, and it has been proved in 2005 that it is impossible to get a better ratio than ~1.36. As an attempt to fill in this gap, in 2002 the so-called Unique Games Conjecture was formulated by Khot. It states that, given a (1−ε)-satisfiable instance of the Unique Label Cover problem, it is NP-hard to find a solution satisfying even an ε fraction of the constraints. Assuming it, we are able to prove many tight inapproximability results; for example, it implies that the Goemans-Williamson algorithm for the Max-Cut problem achieves the best possible approximation ratio. It also follows that we cannot get any better ratio than 2 in the case of the Vertex Cover problem. The Unique Games Conjecture is an unusual open problem since the academic world is about evenly divided on whether it is true or not. During the seminar, I will cover this conjecture in more detail, giving more examples of its influence and presenting recent progress towards proving it.
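
For concreteness, here is a sketch of my own of what a Unique Label Cover instance looks like: each constraint is a bijection between the label sets of its two endpoint variables, and the objective is the fraction of satisfied constraints.

def satisfied_fraction(constraints, assignment):
    # constraints: list of (u, v, pi) with pi a dict encoding a bijection on labels;
    # the constraint (u, v, pi) is satisfied when assignment[v] == pi[assignment[u]].
    satisfied = sum(1 for u, v, pi in constraints if assignment[v] == pi[assignment[u]])
    return satisfied / len(constraints)

constraints = [
    ("x", "y", {0: 1, 1: 2, 2: 0}),
    ("y", "z", {0: 0, 1: 2, 2: 1}),
    ("x", "z", {0: 0, 1: 1, 2: 2}),
]
print(satisfied_fraction(constraints, {"x": 0, "y": 1, "z": 2}))  # 2/3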

  1. Subhash Khot. On the Unique Games Conjecture (Invited Survey). 2010 IEEE 25th Annual Conference on Computational Complexity, Cambridge, MA, 2010, pp. 99-121.
  2. Bartosz Wodziński. Unique Games Conjecture. slides. 2020.

(the seminar will only be online)

25.11.79123
Rafał Byczek
Optymalizacja Kombinatoryczna
Wegner’s conjecture - colouring the square of a planar graph

The square G2 of a graph G is the graph with the same vertex set in which two vertices are joined by an edge if their distance in G is at most two. The chromatic number of the square of a graph G is between D + 1 and D^2 + 1, where D is the maximum degree of G. Equivalently, a square coloring of a graph is a coloring in which any two vertices at distance at most 2 receive different colors.
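
The square operation itself takes only a few lines; a sketch of my own illustrating the definition:

def square(adj):
    # G2: join every vertex to everything within distance at most 2 in G.
    adj2 = {v: set(adj[v]) for v in adj}
    for v in adj:
        for u in adj[v]:
            adj2[v] |= adj[u]
        adj2[v].discard(v)
    return adj2

# The square of the path 1-2-3 is a triangle, so it needs D + 1 = 3 colors.
path = {1: {2}, 2: {1, 3}, 3: {2}}
print(square(path))  # {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}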

In 1977, Gerd Wegner proved that the square of cubic planar graphs is 8-colorable. He conjectured that his bound can be improved - the chromatic number of G2 is at most 7, if D = 3, at most D + 5, if 4 ≤ D ≤ 7, and [3D / 2] + 1, otherwise. Wegner also gave some examples to illustrate that these upper bounds can be obtained.

C. Thomassen (2006) proved the conjecture is true for planar graphs with D = 3. The conjecture is still open for planar graphs with D ≥ 4. However, several upper bounds in terms of the maximum degree D have been proved, as follows. In 1993, Jonas proved that χ(G2) ≤ 9D − 19 for planar graphs with D ≥ 5. Agnarsson and Halldórsson showed that for every planar graph G with maximum degree D ≥ 749, χ(G2) ≤ [9D / 5] + 2. Van den Heuvel and McGuinness (2003) showed that χ(G2) ≤ 2D + 25, Borodin (2002) proved that χ(G2) ≤ [9D / 5] + 1 if D ≥ 47, and Molloy and Salavatipour (2005) proved χ(G2) ≤ [5D / 3] + 78, and moreover χ(G2) ≤ [5D / 3] + 25 if D ≥ 241. Moreover, the conjecture is confirmed for outerplanar graphs and graphs without a K4 minor.

The aim of the seminar will be to present the main facts about Wegner’s conjecture and main techniques and ideas which were used to prove some upper bounds. The presentation will be based on my master thesis.

  1. Rafał Byczek. Wegner’s conjecture Colouring the square of a planar graph. slides. 2020.

(the seminar will only be online)

14.05.76272
Szymon Kapała
Podstawy Informatyki
Searching for shortest and least programs by Cristian Calude, Sanjay Jain, Wolfgang Merkle and Frank Stephan
The Kolmogorov complexity of a string x is defined as the length of a shortest program p of x for some appropriate universal machine U, that is, U(p) =x and p is a shortest string with this property. Neither the plain nor the prefix-free version of Kolmogorov complexity are recursive but for both versions it is well-known that there are recursive exact Solovay functions, that is, recursive upper bounds for Kolmogorov complexity that are infinitely often tight. Let a coding function for a machine M be a function f such that f(x) is always a program of x for M. From the existence of exact Solovay functions it follows easily that for every universal machine there is a recursive coding function that maps infinitely many strings to a shortest program. Extending a recent line of research, in what follows it is investigated in which situations there is a coding function for some universal machine that maps infinitely many strings to the length-lexicographically least program. The main results which hold in the plain as well as in the prefix-free setting are the following. For every universal machine there is a recursive coding function that maps infinitely many strings to their least programs. There is a partial recursive coding function (defined in the natural way) for some universal machine that for every set maps infinitely many prefixes of the set to their least programs. Exactly for every set that is Bennett shallow (not deep), there is a recursive coding function for some universal machine that maps all prefixes of the set to their least programs. Differences between the plain and the prefix-free frameworks are obtained by considering effective sequences I_1, I_2, ...of mutually disjoint finite sets and asking for a recursive coding function for some universal machine that maps at least one string in each set I_n to its least code. Such coding functions do not exist in the prefix-free setting but exist in the plain setting in case the sets I_n are not too small.
23.01.32579
Adam Polak
Informatyka Teoretyczna
Monochromatic triangles, intermediate matrix products, and convolutions

The most studied linear algebraic operation, matrix multiplication, has surprisingly fast O(n^ω) time algorithms, for ω<2.373. On the other hand, the (min,+)-product, which is at the heart of APSP, is widely conjectured to require cubic time. There is a plethora of matrix products and graph problems whose complexity seems to lie in the middle of these two problems, e.g. the (min,max)-product, the min-witness product, APSP in directed unweighted graphs. The best runtimes for these "intermediate" problems are all O(n^{(3+ω)/2}). A similar phenomenon occurs for convolution problems.
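
For reference, one of the "intermediate" products mentioned above written out naively (the cubic-time definition, for illustration only, not a fast algorithm):

def min_max_product(A, B):
    # (A * B)[i][j] = min over k of max(A[i][k], B[k][j]).
    n, m, p = len(A), len(B), len(B[0])
    return [[min(max(A[i][k], B[k][j]) for k in range(m))
             for j in range(p)] for i in range(n)]

A = [[1, 5], [4, 2]]
B = [[3, 7], [6, 1]]
print(min_max_product(A, B))  # [[3, 5], [4, 2]]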


Can one improve upon the running times for these intermediate problems? Or, alternatively, can one relate these problems to each other and to other key problems in a meaningful way?

We make progress on these questions by providing a network of fine-grained reductions. We show for instance that APSP in directed unweighted graphs and min-witness product can be reduced to both the (min,max)-product and a variant of the monochromatic triangle problem. We also show that a natural convolution variant of monochromatic triangle is equivalent to the famous 3SUM problem. As this variant is in O(n^(3/2)) time and 3SUM is in O(n^2) time, our result gives the first fine-grained equivalence between natural problems of different running times.

Joint work with Andrea Lincoln and Virginia Vassilevska Williams.

23.12.76275
Piotr Mikołajczyk
Podstawy Informatyki
Satisfiability in Strategy Logic can be Easier than Model Checking by Erman Acar, Massimo Benerecetti and Fabio Mogavero.

In the design of complex systems, model-checking and satisfiability arise as two prominent decision problems. While model-checking requires the designed system to be provided in advance, satisfiability allows one to check whether such a system even exists. With very few exceptions, the second problem turns out to be harder than the first one from a complexity-theoretic standpoint. In this paper, we investigate the connection between the two problems for a non-trivial fragment of Strategy Logic (SL, for short). SL extends LTL with first-order quantifications over strategies, thus allowing one to reason explicitly about the strategic abilities of agents in a multi-agent system. Satisfiability for the full logic is known to be highly undecidable, while model-checking is non-elementary.

The SL fragment we consider is obtained by preventing strategic quantifications within the scope of temporal operators. The resulting logic is quite powerful, still allowing one to express important game-theoretic properties of multi-agent systems, such as the existence of Nash and immune equilibria, as well as to formalize the rational synthesis problem. We show that satisfiability for this fragment is PSPACE-complete, while its model-checking complexity is 2EXPTIME-hard. The result is obtained by means of an elegant encoding of the problem into the satisfiability of conjunctive-binding first-order logic, a recently discovered decidable fragment of first-order logic.

31.07.48951
Piotr Helm, Krzysztof Zysiak

Optimal Sorting with Persistent Comparison Errors [B. Geissmann et al.]
We consider the problem of sorting n elements in the presence of persistent comparison errors. In this problem, each comparison between two elements can be wrong with some fixed (small) probability, and comparisons cannot be repeated. Perfect sorting is impossible in this model, and the goal is to minimize the dislocation of each element in the output sequence, that is, its distance from its correct position. Existing lower bounds for this problem show that no algorithm can guarantee, with high probability, maximum dislocation better than Ω(log n) or total dislocation better than Ω(n), regardless of its running time.
In this work, we present the first O(n log n)-time sorting algorithm that guarantees both maximum dislocation O(log n) and total dislocation O(n) with high probability. This settles the time complexity of the problem and shows that comparison errors do not increase it: a sequence with the best possible dislocation can be obtained in O(n log n) time, and even without comparison errors Ω(n log n) time is necessary to guarantee such dislocation bounds.
To achieve this optimal result, we solve two subproblems using methods that may be of further, independent interest. One of them is to locate the position at which an element should be inserted into an almost-sorted sequence with dislocation O(log n) so that the dislocation of the resulting sequence is still O(log n). The other is to simultaneously insert m elements into an almost-sorted sequence of m other elements so that the resulting sequence of 2m elements remains almost sorted.
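
The error model and the dislocation measure described above can be made concrete with a small sketch (our own illustration; it is not the algorithm from the paper, and the function names are ours):

    import random

    def make_persistent_comparator(p=0.05, seed=0):
        # Comparator over distinct, orderable elements that errs with probability p,
        # but persistently: the answer for each unordered pair is fixed once queried.
        rng = random.Random(seed)
        flipped = {}
        def less(a, b):
            key = (a, b) if a < b else (b, a)
            if key not in flipped:
                flipped[key] = rng.random() < p
            return (a < b) != flipped[key]
        return less

    def dislocations(output, correct_order):
        # Maximum and total distance of each element from its correct position.
        pos = {x: i for i, x in enumerate(correct_order)}
        d = [abs(i - pos[x]) for i, x in enumerate(output)]
        return max(d), sum(d)
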
27.12.29840
Bartłomiej Bosek
Optymalizacja Kombinatoryczna
Majority choosability of digraphs

One of the variations of graph coloring is the assignment of colors to vertices such that, for each vertex v, at most half of the neighbors of v have the same color as v. Such a coloring is called a majority coloring of the graph. The goal is to color the graph in the above way with the smallest number of colors. During the presentation, various variants of this problem will be discussed, together with historical results, the latest results, and still intriguing conjectures. Among other things, results of joint work with Marcin Anholcer, Jarosław Grytczuk, and Gabriel Jakóbczak will be presented.
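
To make the definition concrete, here is a minimal sketch (our own illustration, not part of the talk) that checks the majority condition; for digraphs, the setting of the title, the condition is usually imposed on out-neighbors:

    def is_majority_coloring(out_neighbors, coloring):
        # out_neighbors: dict vertex -> list of (out-)neighbors; coloring: dict vertex -> color.
        for v, outs in out_neighbors.items():
            same = sum(1 for u in outs if coloring[u] == coloring[v])
            if 2 * same > len(outs):
                return False
        return True

    # Example: a directed 4-cycle is majority 2-colorable.
    cycle = {1: [2], 2: [3], 3: [4], 4: [1]}
    print(is_majority_coloring(cycle, {1: 0, 2: 1, 3: 0, 4: 1}))  # True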

  1. László Lovász, On decomposition of graphs, Studia Scientiarum Mathematicarum Hungarica. A Quarterly of the Hungarian Academy of Sciences, 1, 237–238, 1966.
  2. Paul D. Seymour, On the two-colouring of hypergraphs, The Quarterly Journal of Mathematics. Oxford. Second Series, 25, 303–312, 1974.
  3. Dominic van der Zypen, Majority coloring for directed graphs, MathOverflow, 2016.
  4. Stephan Kreutzer, Sang-il Oum, Paul D. Seymour, Dominic van der Zypen, and David R. Wood, Majority colourings of digraphs, Electronic Journal of Combinatorics, 24(2), Paper 2.25, 2017.
  5. Marcin Anholcer, Bartłomiej Bosek, Jarosław Grytczuk, Majority Choosability of Digraphs, Electronic Journal of Combinatorics, 24(3), Paper 3.57, 2017.
  6. António Girão, Teeradej Kittipassorn, Kamil Popielarz, Generalized majority colourings of digraphs, Combinatorics, Probability and Computing, 26(6), 850–855, 2017.
  7. Fiachra Knox and Robert Šámal, Linear bound for majority colourings of digraphs, Electronic Journal of Combinatorics, 25(3), Paper 3.29, 2018.
  8. Bartłomiej Bosek, Jarosław Grytczuk, Gabriel Jakóbczak, Majority Coloring Game, Discrete Applied Mathematics, 255, 15–20, 2019.
23.06.43420
Przemysław Rutka (Lublin)
Podstawy Informatyki
Selected algorithmic applications of classical orthogonal polynomials
Classical orthogonal polynomials, the corresponding classical weight functions, and their properties find many applications in areas such as tomography, quantum mechanics, combinatorics, image and signal processing, data compression, and improving the efficiency of algorithms. In this last area, many interesting results are still being obtained that allow various problems to be solved numerically and efficiently. These problems include, in particular, barycentric Fejér, Hermite, and Lagrange interpolation, as well as extremal problems of Szegő and Markov–Bernstein type. In the first case, when n values are interpolated at nodes that are the zeros of classical orthogonal polynomials, the computational complexity of the algorithms evaluating the interpolating polynomials from the barycentric formulas can be improved from O(n^2) to O(n). This requires suitable explicit formulas for the barycentric weights, or formulas relating the barycentric weights to the weights and nodes of Gaussian quadratures. In the second case, which turns out to be related to the first, one can derive formulas that allow the best constants appearing in Szegő- and Markov–Bernstein-type inequalities, as well as the values of the extremal polynomials for which these inequalities become equalities, to be computed directly on a computer. These inequalities are connected with iterated classical weight functions and can be used to estimate the values or norms of derivatives D^{k}p or forward differences Δ^{k}p of polynomials p(x), in the continuous and discrete case, respectively.
Other results of this kind, using classical weights and/or classical orthogonal polynomials, can also be obtained for an isoperimetric-type problem in the class of planar closed polynomial curves, for the electrostatic equilibrium problem of a system of charges, for the problem of efficient, stable, and most economical interpolation, and for the problem of two-sided a priori approximation estimates of Chernoff type.
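
For context, the O(n^2)-to-O(n) improvement mentioned above concerns evaluating interpolants via the (second) barycentric formula, which in our notation, with nodes x_j, data f_j and barycentric weights w_j, reads:

    p(x) = \frac{\sum_{j=0}^{n-1} \frac{w_j}{x - x_j} f_j}{\sum_{j=0}^{n-1} \frac{w_j}{x - x_j}}

so once the weights w_j are known (for example from the explicit formulas or the Gauss-quadrature relations mentioned above), each evaluation costs O(n) operations.
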
09.03.68171
Bartłomiej Bosek
Informatyka Teoretyczna
Algorithms for posets and graphs games – coloring and matching

Graph colorings and online algorithms on graphs constitute key areas of algorithmic graph theory. Specifically, the subject of this talk will be a presentation of results concerning

  • different variants of coloring of graphs and partially ordered sets,
  • online coloring of partially ordered sets,
  • incremental matchings of bipartite graphs.

The first part of the talk will concern different aspects of the coloring problem as well as different proof techniques. The presented results concern majority choosability of digraphs, harmonious coloring of hypergraphs, and the semiantichain and unichain covering conjecture for the product of two posets. The next part of the presentation will concern online chain partitioning of posets. A full characterization will be presented of the class of posets for which the number of chains (colors) used by First-Fit is bounded by a function of the width, i.e. of the best offline solution. This part will also present two different subexponential online algorithms for the online chain partitioning problem. The last part will concern the incremental matching problem in bipartite graphs. An incremental algorithm will be presented that maintains a maximum-size matching in total time equal to the running time of one of the fastest offline maximum-matching algorithms, the one given by Hopcroft and Karp. Moreover, I will show an analysis of the shortest augmenting path algorithm.
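
As a point of reference for the online chain partitioning discussion above, here is a minimal sketch of the First-Fit rule (our own illustration; the characterization and the subexponential algorithms in the talk are far more involved). Elements arrive one by one, and comparability with previously presented elements is queried via a hypothetical predicate leq:

    def first_fit_chain_partition(elements, leq):
        # Place each incoming element on the first chain all of whose elements
        # are comparable with it; open a new chain if no existing chain fits.
        chains = []
        for x in elements:
            for chain in chains:
                if all(leq(x, y) or leq(y, x) for y in chain):
                    chain.append(x)
                    break
            else:
                chains.append([x])
        return chains

    # Example: {1, ..., 6} ordered by divisibility is split into three chains.
    print(first_fit_chain_partition(range(1, 7), lambda a, b: b % a == 0))
    # [[1, 2, 4], [3, 6], [5]]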

This is joint work with Marcin Anholcer, Jarosław Grytczuk, Sebastian Czerwiński, Paweł Rzążewski, Stefan Felsner, Kolja Knauer, Grzegorz Matecki, Tomasz Krawczyk, H. A. Kierstead, Matthew Smith, Dariusz Leniowski, Piotr Sankowski, Anna Zych-Pawlewicz.

29.09.76384
Michał Wrona
Informatyka Teoretyczna
Relational Width of First-Order Expansions of Homogeneous Graphs with Bounded Strict Width

We study the amount of consistency (measured by relational width) needed to solve the CSP parametrized by first-order expansions of countably infinite homogeneous graphs, that is, by structures first-order definable in a homogeneous graph using the edge relation E, the relation N that holds between distinct vertices not connected by an edge, and equality. We study our problem for structures that additionally have bounded strict width, i.e., such that establishing local consistency of an instance of the CSP not only decides whether there is a solution but also ensures that every solution may be obtained from a locally consistent instance by greedily assigning values to variables, without backtracking. It is known that with every countably infinite homogeneous graph G one can associate a unique minimal finite set S of finite graphs such that a finite graph H is an induced substructure of G if and only if no H' in S embeds into H.
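
For readers less familiar with "establishing local consistency", the toy sketch below shows the simplest instance of the idea, arc consistency for a finite binary CSP (our own illustration; the paper works with the much stronger (2, L)-consistency over infinite templates):

    from collections import deque

    def enforce_arc_consistency(domains, constraints):
        # domains: dict variable -> set of values
        # constraints: dict (x, y) -> set of allowed pairs (value of x, value of y);
        # for full arc consistency both orientations of each constraint should be listed.
        queue = deque(constraints)
        while queue:
            x, y = queue.popleft()
            allowed = constraints[(x, y)]
            unsupported = {a for a in domains[x]
                           if not any((a, b) in allowed for b in domains[y])}
            if unsupported:
                domains[x] -= unsupported
                if not domains[x]:
                    return False           # inconsistency detected
                queue.extend(arc for arc in constraints if arc[1] == x)
        return True                        # a locally consistent instance remains
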

Our main result is that the structures under consideration have relational width exactly (2, L), where L is the maximal size of a forbidden subgraph in S, but not smaller than 3. It implies, in particular, that the CSP parametrized by such structures may be solved in time O(n^m), where n is the number of variables and m is the maximum of L and the arities of the relations in the signature, while strict width l ensures time O(n^(l+1)). Furthermore, since L may be arbitrarily large, our result contrasts with the collapse of the relational bounded width hierarchy for finite structures, where relational width, if finite, is always at most (2, 3).

Finally, we show that certain maximally tractable constraint languages with a first-order definition in a countably infinite homogeneous graph, whose CSPs are already known to be solvable in polynomial time by other algorithms, may also be solved by establishing consistency, thus providing an alternative algorithm for already studied problems.