Postdoc positions open up at irregular intervals. If you're interested in doing a postdoc in our group, send me an email!
Title | Authors | Conference | Link |
---|---|---|---|
Tight Lower Bounds in the Supported LOCAL Model | Alkida Balliu, Thomas Boudier, Sebastian Brandt, and Dennis Olivetti | PODC | Link |
Completing the Node-Averaged Complexity Landscape of LCLs on Trees | Alkida Balliu, Sebastian Brandt, Fabian Kuhn, Dennis Olivetti, and Gustav Schmid | PODC | Link |
Brief Announcement: Local Advice and Local Decompression | Alkida Balliu, Sebastian Brandt, Fabian Kuhn, Krzysztof Nowicki, Dennis Olivetti, Eva Rotenberg, and Jukka Suomela | PODC | Link Full Version |
Title | Authors | Conference | Link |
---|---|---|---|
On the Node-Averaged Complexity of Locally Checkable Problems on Trees (Best Paper Award at DISC'23) | Alkida Balliu, Sebastian Brandt, Fabian Kuhn, Dennis Olivetti, and Gustav Schmid | DISC | Link |
Distributed Maximal Matching and Maximal Independent Set on Hypergraphs | Alkida Balliu, Sebastian Brandt, Fabian Kuhn, and Dennis Olivetti | SODA | Link |
Title | Authors | Conference | Link |
---|---|---|---|
Distributed Δ-Coloring Plays Hide-and-Seek | Alkida Balliu, Sebastian Brandt, Fabian Kuhn, and Dennis Olivetti | STOC | Link |
Distributed Edge Coloring in Time Polylogarithmic in Δ | Alkida Balliu, Sebastian Brandt, Fabian Kuhn, and Dennis Olivetti | PODC | Link |
The Landscape of Distributed Complexities on Trees and Beyond | Christoph Grunau, Václav Rozhoň, and Sebastian Brandt | PODC | Link |
Exponential Speedup Over Locality in MPC with Optimal Memory | Alkida Balliu, Sebastian Brandt, Manuela Fischer, Rustam Latypov, Yannic Maus, Dennis Olivetti, and Jara Uitto | DISC | Link |
Efficient Classification of Locally Checkable Problems in Regular Trees | Alkida Balliu, Sebastian Brandt, Yi-Jun Chang, Dennis Olivetti, Jan Studený, and Jukka Suomela | DISC | Link |
Local Problems on Trees from the Perspectives of Distributed Algorithms, Finitary Factors, and Descriptive Combinatorics | Sebastian Brandt, Yi-Jun Chang, Jan Grebík, Christoph Grunau, Václav Rozhoň, and Zoltán Vidnyánszky | ITCS | Link |
Title | Authors | Conference | Link |
---|---|---|---|
Improved Distributed Lower Bounds for MIS and Bounded (Out-)Degree Dominating Sets in Trees | Alkida Balliu, Sebastian Brandt, Fabian Kuhn, and Dennis Olivetti | PODC | Link |
The Randomized Local Computation Complexity of the Lovász Local Lemma | Sebastian Brandt, Christoph Grunau, and Václav Rozhoň | PODC | Link |
Locally Checkable Problems in Rooted Trees | Alkida Balliu, Sebastian Brandt, Dennis Olivetti, Jan Studený, Jukka Suomela, and Aleksandr Tereshchenko | PODC | Link |
Efficient Load-Balancing through Distributed Token Dropping | Sebastian Brandt, Barbara Keller, Joel Rybicki, Jukka Suomela, and Jara Uitto | SPAA | Link |
Brief Announcement: Memory Efficient Massively Parallel Algorithms for LCL Problems on Trees | Sebastian Brandt, Rustam Latypov, and Jara Uitto | DISC | Link Full Version |
Title | Authors | Conference | Link |
---|---|---|---|
Distributed Lower Bounds for Ruling Sets | Alkida Balliu, Sebastian Brandt, and Dennis Olivetti | FOCS | Link |
Truly Tight-in-Δ Bounds for Bipartite Maximal Matching and Variants | Sebastian Brandt and Dennis Olivetti | PODC | Link |
How Much Does Randomness Help with Locally Checkable Problems? | Alkida Balliu, Sebastian Brandt, Dennis Olivetti, and Jukka Suomela | PODC | Link |
Generalizing the Sharp Threshold Phenomenon for the Distributed Complexity of the Lovász Local Lemma | Sebastian Brandt, Christoph Grunau, and Václav Rozhoň | PODC | Link |
Tight Bounds for Deterministic High-Dimensional Grid Exploration | Sebastian Brandt, Julian Portmann, and Jara Uitto | DISC | Link |
Classification of Distributed Binary Labeling Problems | Alkida Balliu, Sebastian Brandt, Yuval Efron, Juho Hirvonen, Yannic Maus, Dennis Olivetti, and Jukka Suomela | DISC | Link |
Title | Authors | Conference | Link |
---|---|---|---|
Lower Bounds for Maximal Matchings and Maximal Independent Sets (Best Paper Award at FOCS'19) | Alkida Balliu, Sebastian Brandt, Juho Hirvonen, Dennis Olivetti, Mikaël Rabie, and Jukka Suomela | FOCS | Link |
An Automatic Speedup Theorem for Distributed Problems | Sebastian Brandt | PODC | Link |
A Sharp Threshold Phenomenon for the Distributed Complexity of the Lovász Local Lemma | Sebastian Brandt, Yannic Maus, and Jara Uitto | PODC | Link |
The Distributed Complexity of Locally Checkable Problems on Paths Is Decidable | Alkida Balliu, Sebastian Brandt, Yi-Jun Chang, Dennis Olivetti, Mikaël Rabie, and Jukka Suomela | PODC | Link |
Massively Parallel Computation of Matching and MIS in Sparse Graphs | Soheil Behnezhad, Sebastian Brandt, Masha Derakhshan, Manuela Fischer, MohammadTaghi Hajiaghayi, Richard M. Karp, and Jara Uitto | PODC | Link |
Breaking the Linear-Memory Barrier in MPC: Fast MIS on Trees with Strongly Sublinear Memory (Best Student Paper Award at SIROCCO'19) | Sebastian Brandt, Manuela Fischer, and Jara Uitto | SIROCCO | Link |
Title | Authors | Conference | Link |
---|---|---|---|
Almost Global Problems in the LOCAL Model | Alkida Balliu, Sebastian Brandt, Dennis Olivetti, and Jukka Suomela | DISC | Link |
A Tight Lower Bound for Semi-Synchronous Collaborative Grid Exploration | Sebastian Brandt, Jara Uitto, and Roger Wattenhofer | DISC | Link |
Fine-grained Lower Bounds on Cops and Robbers | Sebastian Brandt, Seth Pettie, and Jara Uitto | ESA | Link |
Title | Authors | Conference | Link |
---|---|---|---|
LCL Problems on Grids | Sebastian Brandt, Juho Hirvonen, Janne H. Korhonen, Tuomo Lempiäinen, Patric R. J. Östergard, Christopher Purcell, Joel Rybicki, Jukka Suomela, and Przemysław Uznański | PODC | Link |
A Tight Lower Bound for the Capture Time of the Cops and Robbers Game | Sebastian Brandt, Yuval Emek, Jara Uitto, and Roger Wattenhofer | ICALP | Link |
Approximating Small Balanced Vertex Separators in Almost Linear Time (Best Student Paper Award at WADS'17) | Sebastian Brandt and Roger Wattenhofer | WADS | Link |
Collaboration Without Communication: Evacuating Two Robots from a Disk | Sebastian Brandt, Felix Laufenberg, Yuezhou Lv, David Stolz, and Roger Wattenhofer | CIAC | Link |
Wireless Evacuation on m Rays with k Searchers | Sebastian Brandt, Klaus-Tycho Foerster, Benjamin Richner, and Roger Wattenhofer | SIROCCO | Link |
Title | Authors | Conference | Link |
---|---|---|---|
A Lower Bound for the Distributed Lovász Local Lemma | Sebastian Brandt, Orr Fischer, Juho Hirvonen, Barbara Keller, Tuomo Lempiäinen, Joel Rybicki, Jukka Suomela, and Jara Uitto | STOC | Link |
On Consistent Migration of Flows in SDNs | Sebastian Brandt, Klaus-Tycho Förster, and Roger Wattenhofer | INFOCOM | Link |
Augmenting Anycast Network Flows | Sebastian Brandt, Klaus-Tycho Förster, and Roger Wattenhofer | ICDCN | Link |
Title | Authors | Conference | Link |
---|---|---|---|
Toehold DNA Languages are Regular | Sebastian Brandt, Nicolas Mattia, Jochen Seidel, and Roger Wattenhofer | ISAAC | Link |
Title | Authors | Journal | Year | Link |
---|---|---|---|---|
Locally Checkable Problems in Rooted Trees | Alkida Balliu, Sebastian Brandt, Yi-Jun Chang, Dennis Olivetti, Jan Studený, Jukka Suomela, and Aleksandr Tereshchenko | Distributed Computing | 2023 | Link |
Distributed Lower Bounds for Ruling Sets | Alkida Balliu, Sebastian Brandt, and Dennis Olivetti | SIAM Journal on Computing | 2022 | Link |
Lower Bounds for Maximal Matchings and Maximal Independent Sets | Alkida Balliu, Sebastian Brandt, Juho Hirvonen, Dennis Olivetti, Mikaël Rabie, and Jukka Suomela | Journal of the ACM | 2021 | Link |
Breaking the Linear-Memory Barrier in MPC: Fast MIS on Trees with Strongly Sublinear Memory | Sebastian Brandt, Manuela Fischer, and Jara Uitto | Theoretical Computer Science | 2021 | Link |
Almost Global Problems in the LOCAL Model | Alkida Balliu, Sebastian Brandt, Dennis Olivetti, and Jukka Suomela | Distributed Computing | 2021 | Link |
A Tight Lower Bound for Semi-Synchronous Collaborative Grid Exploration | Sebastian Brandt, Jara Uitto, and Roger Wattenhofer | Distributed Computing | 2020 | Link |
A Tight Lower Bound for the Capture Time of the Cops and Robbers Game | Sebastian Brandt, Jara Uitto, and Roger Wattenhofer | Theoretical Computer Science | 2020 | Link |
Wireless Evacuation on m Rays with k Searchers | Sebastian Brandt, Klaus-Tycho Foerster, Benjamin Richner, and Roger Wattenhofer | Theoretical Computer Science | 2020 | Link |
Online Graph Exploration on a Restricted Graph Class: Optimal Solutions for Tadpole Graphs | Sebastian Brandt, Klaus-Tycho Foerster, Jonathan Maurer, and Roger Wattenhofer | Theoretical Computer Science | 2020 | Link |
Approximating Small Balanced Vertex Separators in Almost Linear Time | Sebastian Brandt and Roger Wattenhofer | Algorithmica | 2019 | Link |
Augmenting Flows for the Consistent Migration of Multi-Commodity Single-Destination Flows in SDNs | Sebastian Brandt, Klaus-Tycho Foerster, and Roger Wattenhofer | Pervasive and Mobile Computing | 2017 | Link |
For everyone who wants to know more about what kinds of problems we study and what kinds of questions we ask, let me try to give you an introduction to what we're interested in (for another introduction with lots of pictures, see here). To make things as accessible as possible, we will take a look at a typical problem we would like to solve and explain some settings and concepts along the way. Let me say up front that this will not give you anything resembling a comprehensive overview of the (fairly vast) field of distributed algorithms; what we will discuss below will hopefully give you a good first idea of what distributed graph algorithms are about, but we won't have time to touch upon many fascinating research directions of the distributed world, such as distributed complexity theory, distributed lower bounds, the role of randomness, and many more. Alright, let me say at least a few words about the three directions I just mentioned before getting to the meat of the introduction and looking at a concrete problem. If some of the terms I'm using do not make sense right away, I hope they will after reading the later part containing the proper introduction.
In distributed complexity theory, we want to map the landscape of time complexities that distributed problems can exhibit. One recent highlight is that for certain problem classes we now fully understand what this landscape looks like; fascinatingly, there are large gaps in the complexity landscape that one can use for speed-up results (just show that the time complexity of a problem is below the higher end of a gap, and we know that its complexity is at most the lower end of the gap; usually this argument even produces an algorithm with a corresponding runtime).
Regarding distributed lower bounds (about which not too much was known, say, 7 years ago), the last few years have seen an explosion of new results and techniques, and have shown that creativity is as important in proving impossibility results as in designing algorithms. A very successful technique, in its simplest form, transforms any given (too) fast algorithm into an even faster one, showing that such fast algorithms cannot exist. Another very creative technique consists of transforming a supposedly existing algorithm into a set of two-player games and obtaining a contradiction by showing that at least one of the games does not have a winning strategy for either player.
Regarding the role of randomness, there are intriguing results about how much randomness can help in solving a problem (e.g., often not more than exponentially), but as in the previous two research directions, there are many open questions. Some of these are related to a fascinating use of randomness in algorithm design, called shattering, where the idea is to have a fairly simple randomized process that shatters the input graph into small pieces, which can then be solved with the best available deterministic algorithm.
In all three directions, and in distributed graph algorithms in general, there is a lot we already understand, but even more that we don't know how to do. There have been a number of breakthroughs in recent years that provide new tools and techniques to work with, but for many open problems it is expected that something new is needed, e.g., some creative algorithmic idea that no one has thought of or a different way to look at things. I consider myself fortunate that I happened to stumble into this world at an almost ideal point in time: currently, a lot is happening in distributed algorithms, and our group will gladly contribute to this progress.
After this slight digression, let me come back to the initial plan of showing you a typical problem that we would like to solve. This won't be just some arbitrary problem, but one of the really big open problems in our area. Still, after laying a bit of groundwork, it is not hard to describe. In fact, one could immediately start thinking about possible approaches for solving it, without knowing anything about techniques in our area. Of course, knowing some techniques will usually help a lot, but it isn't impossible that some fresh and unbiased idea is exactly what is needed to make progress.
In the area of distributed graph algorithms, which is currently our group's main research focus, the problems we want to solve are usually, well, graph problems. The problem we will be looking at is a simple graph coloring problem: we want to color the vertices of whatever graph is given to us with $\Delta + 1$ colors such that no two neighbors receive the same color. Here, $\Delta$ denotes the maximum degree of the graph, i.e., the degree of the vertex with the largest number of neighbors. A simple greedy argument shows that a vertex coloring with $\Delta + 1$ colors always exists: one can simply iterate through the vertices in an arbitrary order, one by one, and assign each vertex a color that none of its neighbors has received so far. Since each vertex has at most $\Delta$ neighbors, such a color always exists. Note that this argument doesn't work anymore if we have only $\Delta$ colors available; in fact, there are graphs that do not admit a $\Delta$-coloring.
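The greedy argument above can be sketched in a few lines of Python. Note that this is just an illustration of the existence argument, not a distributed algorithm; the graph encoding and function name are my own choices:

```python
def greedy_coloring(adj):
    """Greedily (Delta + 1)-color a graph given as a dict mapping each
    vertex to a list of its neighbors."""
    delta = max(len(nbrs) for nbrs in adj.values())  # maximum degree Delta
    palette = range(delta + 1)                       # Delta + 1 colors: 0..Delta
    color = {}
    for v in adj:                                    # arbitrary vertex order
        taken = {color[u] for u in adj[v] if u in color}
        # v has at most Delta neighbors, so some color in the palette is free
        color[v] = next(c for c in palette if c not in taken)
    return color

# Example: a 4-cycle has Delta = 2, so 3 colors always suffice.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
coloring = greedy_coloring(cycle)
assert all(coloring[u] != coloring[v] for u in cycle for v in cycle[u])
```

The `next(...)` call is exactly the step of the argument: since at most $\Delta$ colors are taken by neighbors, a free color among the $\Delta + 1$ available is guaranteed to exist.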
While knowing that a $(\Delta + 1)$-coloring exists is already nice, as algorithm researchers we want to know more, namely, how and how fast we can compute such a $(\Delta + 1)$-coloring. However, as researchers in distributed algorithms, we are not satisfied with finding just some fast coloring algorithm: we want to design a fast algorithm that computes such a coloring in a distributed fashion. What does that mean? Let me explain our setting.
Imagine that each vertex of the input graph is a separate computational entity and that each edge is a communication link between the two entities corresponding to its endpoints. In other words, you can imagine the graph to be a computer network, and all computers work together to solve the given problem by sending around messages. In our case, each computer would like to choose a color that no neighbor has. Doesn't sound so complicated? If we don't care about how long it takes, that's correct. But we would like to have an algorithm that is very fast as a function of the size of the network, as nowadays networks can be very large (think of the internet). What does "fast" mean? And how exactly do these computers work together? Alright, let's be a bit more precise.
The joint computation performed by the vertices (i.e., by the associated computational devices) proceeds in synchronous rounds $r = 1, 2, 3, \ldots$. In each round, each vertex is allowed to send an arbitrarily large message to each of its neighbors. Moreover, at any point in time any vertex can (internally) perform any computation on the knowledge it has gathered so far (e.g., in order to decide which message to send in the next round). Each vertex has to decide at some point to terminate, upon which it has to provide its output (which in the $(\Delta + 1)$-coloring problem would be its own color) and no longer participates in the computation. What we would like to know (and what we call the runtime of the computation/algorithm) is simply how many synchronous rounds it takes until the last vertex terminates. Of course we want the vertices to terminate as fast as possible, but always under the condition that the (combined) provided output is correct (e.g., in our case it is required that the combined output constitutes a proper $(\Delta + 1)$-coloring). The focus on the number of rounds implies that internal computation is essentially free; all we care about (for now) is how many message-passing rounds we need. (For everyone from the maths side: there is an equivalent and very clean way to define the setting in a non-algorithmic fashion; we will provide details further below.)
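As a toy illustration of the synchronous round structure, here is a minimal simulator sketch (the function names and interface are invented for this example, not a standard framework). The key point is that all messages of a round are prepared before any vertex updates its state, which models the lock-step execution:

```python
def run_rounds(adj, init_state, send, receive, rounds):
    """Simulate synchronous message passing on a graph.
    adj: vertex -> list of neighbors; init_state(v) -> initial state of v;
    send(state, neighbor) -> message; receive(state, msgs) -> new state."""
    state = {v: init_state(v) for v in adj}
    for _ in range(rounds):
        # collect all messages of this round before updating any state
        inbox = {v: [send(state[u], v) for u in adj[v]] for v in adj}
        state = {v: receive(state[v], inbox[v]) for v in adj}
    return state

# Example: on a path, each vertex learns the minimum identifier within
# distance T = 2 (information travels exactly one hop per round).
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
ids = {0: 5, 1: 3, 2: 9, 3: 1}
result = run_rounds(path, lambda v: ids[v],
                    send=lambda state, nbr: state,
                    receive=lambda state, msgs: min([state] + msgs),
                    rounds=2)
assert result == {0: 3, 1: 1, 2: 1, 3: 1}
```

The example also hints at a general principle: after $T$ rounds, a vertex can only have learned information from within distance $T$.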
As algorithm designers, our task is to tell the vertices what messages they should send around and which internal computations they should perform (before we know anything about the input graph). In other words, we are supposed to provide one algorithm, one list of instructions, that is given to and executed by every vertex of the input graph. Of course, this list can contain instructions that are dependent on, e.g., the knowledge or degree of the vertex executing the instructions; for instance, an instruction could tell a vertex to perform some action only if its degree is at least $5$, or if it already received a certain kind of message. Due to the fact that each vertex executes the same algorithm, it is usually assumed that each vertex has a unique numerical identifier, i.e., each vertex is given some positive integer that no other vertex is given (but we have no control over which vertex gets which identifier). This ensures that even in very symmetrical input graphs, different vertices may perform different actions despite executing the same algorithm.
Let's have a look at a simple example algorithm. Say, we are guaranteed that the input graph is a directed cycle and we would like to compute a coloring with at most $6$ colors. Here is a very fast algorithm for this problem, devised by Cole and Vishkin in the '80s.
Initially, each vertex interprets its unique identifier as a color (where a color is nothing other than some positive integer). Then, in each round, each vertex chooses a new color such that the size of the color space shrinks, until it has size $6$. More specifically, the colors are updated in each round by doing the following. First, each vertex sends its own color to its parent. Then, each vertex computes the smallest index $i$ in which its own color, represented as a bit string, and the color it received from its child, also represented as a bit string, differ. Finally, each vertex updates its color $c$ to the new color $2i+c(i)$, where $c(i)$ denotes the bit of color $c$ at index $i$.
We claim that after each round, the new coloring is also proper. Our argument is that each vertex $v$ makes sure that it has a different color than its child. Why is that the case? Well, by the definition of $i$, if the child of $v$ also happens to use the same index $i$ for its new color, then the bit it adds will not be the same as the bit $c(i)$ that $v$ adds. Hence, $v$ and its child differ either in $i$ or in $c(i)$ (or in both), which ensures that their new colors are different. So we always preserve a proper coloring.
Now, how large is the runtime of our algorithm? Well, roughly speaking, since the new color is essentially an index into the bit string representing the old color, the new color is exponentially smaller than the old one! (The factor $2$ and the added bit can essentially be ignored in the analysis (they do not change the number of rounds asymptotically), but they are relevant in limiting how small we can make our final color space: once the color space has size $6$, it doesn't shrink anymore.) We conclude that even if the space from which our unique identifiers (that we use as the initial coloring) come is very large (say, the number of atoms in the observable universe), the algorithm is very, very fast. If the identifier space is called $S$, then, up to a constant factor, the number of rounds our algorithm takes is $\log^* S$, which is the minimum number of times one has to iteratively apply the logarithm function to $S$ to reach a value that is at most $1$. If $S = 10^{80}$, then $\log^* S = 5$, so our algorithm is really fast. Usually, the identifier space is chosen to be of size polynomial in the number $n$ of vertices of the input graph, in which case we can write the runtime of our algorithm as a function of $n$ (as desired): the algorithm terminates in $O(\log^* n)$ rounds.
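The color-reduction round described above is easy to simulate. Here is a Python sketch (my own illustration; the sample identifiers and the convention that vertex $v$'s child is vertex $(v - 1) \bmod n$ are choices made for this example):

```python
def one_round(colors):
    """One Cole-Vishkin round on a directed cycle; colors[v] is v's color,
    and vertex v's child is vertex (v - 1) mod n."""
    n = len(colors)
    new = []
    for v in range(n):
        c, child = colors[v], colors[(v - 1) % n]
        diff = c ^ child
        i = (diff & -diff).bit_length() - 1  # smallest index where the bit strings differ
        new.append(2 * i + ((c >> i) & 1))   # new color: 2i + c(i)
    return new

def is_proper(colors):
    n = len(colors)
    return all(colors[v] != colors[(v - 1) % n] for v in range(n))

# Start from unique identifiers and iterate until the colors fit in {0, ..., 5}.
colors = [3, 14, 7, 22, 9, 41, 18, 5]
while max(colors) > 5:
    colors = one_round(colors)
    assert is_proper(colors)  # each round preserves a proper coloring
assert all(0 <= c <= 5 for c in colors)
```

Running this, the color space collapses from the identifier range to $\{0, \ldots, 5\}$ within a handful of rounds, matching the exponential shrinkage described above.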
After discussing the setting and looking at an example algorithm, let's now come back to the problem of computing a $(\Delta + 1)$-coloring. The best (deterministic) upper bound currently known is due to a result by Ghaffari and Kuhn from 2021: a $(\Delta + 1)$-coloring can be computed (in the setting explained above) in $O(\log^3 n)$ rounds. This is already a decent runtime, but the best known lower bound is just $\Omega(\log^* n)$ (see, e.g., here)! So there is quite a large gap between the lower and upper bound, and according to our current knowledge it is still possible that there is an $O(\log^* n)$-round algorithm for $(\Delta + 1)$-coloring, i.e., an incredibly fast algorithm.
It is worth mentioning that we are interested in this problem not only because there is a large gap in our knowledge and potential for a very fast algorithm. No, $(\Delta + 1)$-coloring is also (or rather primarily) a very important problem because of its usefulness for solving other problems. There are many distributed algorithms (for very different problems) that start by computing a $(\Delta + 1)$-coloring and then, e.g., use the computed colors as a schedule for further computations. Hence, we can phrase one of the big open problems in the area of distributed algorithms as follows.
How fast can we compute a $(\Delta + 1)$-coloring distributedly?
Any progress in improving the upper or lower bounds would be of great interest to the distributed community! For example, turning the $O(\log^3 n)$-round algorithm into an $O(\log^2 n)$-round algorithm, or just improving the lower bound from $\Omega(\log^* n)$ to $\Omega((\log^* n)^2)$, would likely provide fascinating new insights into this important problem.
While this is all I wanted to say about $(\Delta + 1)$-coloring, let me come back to something I mentioned earlier, namely that there is an equivalent non-algorithmic way of defining our setting. If you think a bit about which messages a vertex should send to its neighbors, then you will realize that due to the fact that we do not restrict the message size in any way, each vertex should always send everything it has learned so far. If each vertex follows this rule, then in a $T$-round algorithm every vertex learns exactly its $T$-hop neighborhood in the graph (obviously it can learn nothing beyond distance $T$ since the information doesn't have enough time to travel that far). In other words, each vertex decides on its output purely by looking at its $T$-hop neighborhood. As such, we can understand a $T$-round algorithm as a function from the space of all possible vertex $T$-hop neighborhoods to the space of allowed vertex outputs (where the function describes what the center vertex of the $T$-hop neighborhood outputs). Having both the algorithmic message-passing definition and the non-algorithmic function definition is very useful; depending on the studied problem and the objective (algorithm, lower bound, ...), it could be either perspective that makes it easier to come up with a good approach (while technically one can always rephrase an algorithm given in one formalism in the other formalism, and vice versa).
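The equivalence is easy to check on small examples. The following sketch (my own sanity check, with invented names) simulates the full-information strategy, where every vertex forwards everything it knows each round, and compares the result against the $T$-hop ball computed by breadth-first search:

```python
from collections import deque

def full_information(adj, T):
    """Simulate T rounds in which every vertex forwards everything it knows;
    returns each vertex's resulting knowledge (a set of vertices)."""
    know = {v: {v} for v in adj}
    for _ in range(T):
        know = {v: know[v].union(*(know[u] for u in adj[v])) for v in adj}
    return know

def ball(adj, v, T):
    """Vertices within distance T of v, via breadth-first search."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == T:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

# On a path of 6 vertices, knowledge after T rounds is exactly the T-hop ball.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
assert all(full_information(path, 2)[v] == ball(path, v, 2) for v in path)
```

Since the knowledge of a vertex after $T$ rounds is exactly its $T$-hop ball, any $T$-round algorithm can indeed be described as a function of that ball.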
I hope that I managed to give you at least a rough idea of what kinds of problems and questions we are interested in. Of course, what we have seen so far constitutes only a small part of the types of problems, questions, and settings that we study; as mentioned before, distributed algorithms is a vast field with lots of exciting open problems! If you want to learn more about distributed algorithms, have a look at some of the excellent lecture notes accompanying courses in this area (such as these or these).