Sebastian Brandt

I am a tenure-track faculty member at the Helmholtz Center for Information Security (CISPA) in Saarbruecken, Germany.

Research Interests

My research tackles questions from the area of theoretical computer science, mostly revolving around topics in algorithm design and analysis.

One of my main goals is to understand the nature of locality in algorithms: What can be computed with access to only a small, local part of the input data? What are the fundamental limitations due to locality, and how can we prove them? Naturally, my research focuses on distributed and parallel algorithms, but I am also interested in understanding how locality affects computation in general, and extending local techniques to a broader range of areas in computer science.

Other more-or-less related topics I am curious about and have worked on include computability/decidability, massively parallel computation, biologically-inspired algorithms, and mobile agents. Moreover, I do not feel constrained by area tags. If I encounter some interesting problem from a completely different field, I am happy to take a shot at it and collaborate with people knowing more about it than I do.

If you would like to know more about distributed algorithms and the kinds of questions we study, please take a look at the introduction here.

About Me

Prior to joining CISPA, I was a postdoc in the Discrete and Distributed Algorithms Group at ETH Zurich, led by Mohsen Ghaffari. I received my PhD from ETH in the beginning of 2018, under the supervision of Roger Wattenhofer.

Email:

Google Scholar
DBLP

We have several open positions for PhD students and postdocs!

Open Positions

Several PhD and postdoc positions are available in our group! We are looking for bright and motivated students and researchers interested in working on distributed and parallel algorithms. Our newly founded group aims to be one of the leading research groups in the area of distributed and parallel algorithms, with a strong emphasis on cutting-edge research. For an introduction to our research, please take a look here.

PhD positions: The ideal candidate has an excellent master's (or, in exceptional cases, bachelor's) degree in computer science or a related field. We expressly encourage bright students from maths (in particular pure maths) to apply if they are interested in algorithmic questions. A background in distributed or parallel algorithms is not necessary.

Postdoc positions: The ideal candidate has a strong background in algorithmic research, with publications in leading TCS conferences or journals, and is interested in working in the area of distributed and parallel algorithms. The candidate is expected to have a doctoral degree, or to complete their PhD soon. The postdoc positions are typically for 1 to 2 years, but can potentially be extended. We encourage postdocs to develop their own research agenda.

What we offer: All positions are fully funded and come with excellent working conditions, such as a competitive salary, social security, 30 days of vacation, and generous travel support. Saarbruecken is considered one of the top locations for computer science in Europe, with the Saarland Informatics Campus being home to Saarland University and several leading research institutes (such as the Helmholtz Center, two Max Planck Institutes, a Fraunhofer Institute, and more), offering a vibrant atmosphere and ideal conditions for collaboration. Beyond that, our group has established strong ties to other leading research groups across Europe, creating further potential for collaboration and research visits.

Our group is committed to providing a healthy work environment and fostering diversity and respectful interaction. We welcome applications by candidates from all backgrounds.

Participation in teaching is encouraged, but not required. The starting dates of the positions are flexible, although we would prefer to fill them as soon as possible. Knowledge of German is not necessary.

We will accept applications until the positions are filled. However, for full consideration, please submit your application by June 28, 2022, either here (PhD positions) or here (postdoc positions). The linked application pages also specify the required (marked with a star) and optional documents.

Please explicitly indicate in your cover letter that the application is for the group of Sebastian Brandt (as the application portal is used CISPA-wide).

If you have any questions, don't hesitate to contact Sebastian Brandt (). Informal inquiries are welcome.

Teaching

Summer 2022

Professional Activities

Publications


Conference Papers

2022

Title | Authors | Conference | Link
Distributed Δ-Coloring Plays Hide-and-Seek | Alkida Balliu, Sebastian Brandt, Fabian Kuhn, and Dennis Olivetti | STOC | Link
Distributed Edge Coloring in Time Polylogarithmic in Δ | Alkida Balliu, Sebastian Brandt, Fabian Kuhn, and Dennis Olivetti | PODC
The Landscape of Distributed Complexities on Trees and Beyond | Christoph Grunau, Václav Rozhoň, and Sebastian Brandt | PODC | Link
Local Problems on Trees from the Perspectives of Distributed Algorithms, Finitary Factors, and Descriptive Combinatorics | Sebastian Brandt, Yi-Jun Chang, Jan Grebík, Christoph Grunau, Václav Rozhoň, and Zoltán Vidnyánszky | ITCS | Link

2021

Title | Authors | Conference | Link
Improved Distributed Lower Bounds for MIS and Bounded (Out-)Degree Dominating Sets in Trees | Alkida Balliu, Sebastian Brandt, Fabian Kuhn, and Dennis Olivetti | PODC | Link
The Randomized Local Computation Complexity of the Lovász Local Lemma | Sebastian Brandt, Christoph Grunau, and Václav Rozhoň | PODC | Link
Locally Checkable Problems in Rooted Trees | Alkida Balliu, Sebastian Brandt, Dennis Olivetti, Jan Studený, Jukka Suomela, and Aleksandr Tereshchenko | PODC | Link
Efficient Load-Balancing through Distributed Token Dropping | Sebastian Brandt, Barbara Keller, Joel Rybicki, Jukka Suomela, and Jara Uitto | SPAA | Link
Brief Announcement: Memory Efficient Massively Parallel Algorithms for LCL Problems on Trees | Sebastian Brandt, Rustam Latypov, and Jara Uitto | DISC | Link, Full Version

2020

Title | Authors | Conference | Link
Distributed Lower Bounds for Ruling Sets | Alkida Balliu, Sebastian Brandt, and Dennis Olivetti | FOCS | Link
Truly Tight-in-Δ Bounds for Bipartite Maximal Matching and Variants | Sebastian Brandt and Dennis Olivetti | PODC | Link
How Much Does Randomness Help with Locally Checkable Problems? | Alkida Balliu, Sebastian Brandt, Dennis Olivetti, and Jukka Suomela | PODC | Link
Generalizing the Sharp Threshold Phenomenon for the Distributed Complexity of the Lovász Local Lemma | Sebastian Brandt, Christoph Grunau, and Václav Rozhoň | PODC | Link
Tight Bounds for Deterministic High-Dimensional Grid Exploration | Sebastian Brandt, Julian Portmann, and Jara Uitto | DISC | Link
Classification of Distributed Binary Labeling Problems | Alkida Balliu, Sebastian Brandt, Yuval Efron, Juho Hirvonen, Yannic Maus, Dennis Olivetti, and Jukka Suomela | DISC | Link

2019

Title | Authors | Conference | Link
Lower Bounds for Maximal Matchings and Maximal Independent Sets (Best Paper Award at FOCS'19) | Alkida Balliu, Sebastian Brandt, Juho Hirvonen, Dennis Olivetti, Mikaël Rabie, and Jukka Suomela | FOCS | Link
An Automatic Speedup Theorem for Distributed Problems | Sebastian Brandt | PODC | Link
A Sharp Threshold Phenomenon for the Distributed Complexity of the Lovász Local Lemma | Sebastian Brandt, Yannic Maus, and Jara Uitto | PODC | Link
The Distributed Complexity of Locally Checkable Problems on Paths Is Decidable | Alkida Balliu, Sebastian Brandt, Yi-Jun Chang, Dennis Olivetti, Mikaël Rabie, and Jukka Suomela | PODC | Link
Massively Parallel Computation of Matching and MIS in Sparse Graphs | Soheil Behnezhad, Sebastian Brandt, Masha Derakhshan, Manuela Fischer, MohammadTaghi Hajiaghayi, Richard M. Karp, and Jara Uitto | PODC | Link
Breaking the Linear-Memory Barrier in MPC: Fast MIS on Trees with Strongly Sublinear Memory (Best Student Paper Award at SIROCCO'19) | Sebastian Brandt, Manuela Fischer, and Jara Uitto | SIROCCO | Link

2018

Title | Authors | Conference | Link
Almost Global Problems in the LOCAL Model | Alkida Balliu, Sebastian Brandt, Dennis Olivetti, and Jukka Suomela | DISC | Link
A Tight Lower Bound for Semi-Synchronous Collaborative Grid Exploration | Sebastian Brandt, Jara Uitto, and Roger Wattenhofer | DISC | Link
Fine-grained Lower Bounds on Cops and Robbers | Sebastian Brandt, Seth Pettie, and Jara Uitto | ESA | Link

2017

Title | Authors | Conference | Link
LCL Problems on Grids | Sebastian Brandt, Juho Hirvonen, Janne H. Korhonen, Tuomo Lempiäinen, Patric R. J. Östergård, Christopher Purcell, Joel Rybicki, Jukka Suomela, and Przemysław Uznański | PODC | Link
A Tight Lower Bound for the Capture Time of the Cops and Robbers Game | Sebastian Brandt, Yuval Emek, Jara Uitto, and Roger Wattenhofer | ICALP | Link
Approximating Small Balanced Vertex Separators in Almost Linear Time (Best Student Paper Award at WADS'17) | Sebastian Brandt and Roger Wattenhofer | WADS | Link
Collaboration Without Communication: Evacuating Two Robots from a Disk | Sebastian Brandt, Felix Laufenberg, Yuezhou Lv, David Stolz, and Roger Wattenhofer | CIAC | Link
Wireless Evacuation on m Rays with k Searchers | Sebastian Brandt, Klaus-Tycho Foerster, Benjamin Richner, and Roger Wattenhofer | SIROCCO | Link

2016

Title | Authors | Conference | Link
A Lower Bound for the Distributed Lovász Local Lemma | Sebastian Brandt, Orr Fischer, Juho Hirvonen, Barbara Keller, Tuomo Lempiäinen, Joel Rybicki, Jukka Suomela, and Jara Uitto | STOC | Link
On Consistent Migration of Flows in SDNs | Sebastian Brandt, Klaus-Tycho Förster, and Roger Wattenhofer | INFOCOM | Link
Augmenting Anycast Network Flows | Sebastian Brandt, Klaus-Tycho Förster, and Roger Wattenhofer | ICDCN | Link

2015

Title | Authors | Conference | Link
Toehold DNA Languages are Regular | Sebastian Brandt, Nicolas Mattia, Jochen Seidel, and Roger Wattenhofer | ISAAC | Link

Journal Papers

Title | Authors | Journal | Year | Link
Lower Bounds for Maximal Matchings and Maximal Independent Sets | Alkida Balliu, Sebastian Brandt, Juho Hirvonen, Dennis Olivetti, Mikaël Rabie, and Jukka Suomela | Journal of the ACM | 2021 | Link
Breaking the Linear-Memory Barrier in MPC: Fast MIS on Trees with Strongly Sublinear Memory | Sebastian Brandt, Manuela Fischer, and Jara Uitto | Theoretical Computer Science | 2021 | Link
Almost Global Problems in the LOCAL Model | Alkida Balliu, Sebastian Brandt, Dennis Olivetti, and Jukka Suomela | Distributed Computing | 2021 | Link
A Tight Lower Bound for Semi-Synchronous Collaborative Grid Exploration | Sebastian Brandt, Jara Uitto, and Roger Wattenhofer | Distributed Computing | 2020 | Link
A Tight Lower Bound for the Capture Time of the Cops and Robbers Game | Sebastian Brandt, Jara Uitto, and Roger Wattenhofer | Theoretical Computer Science | 2020 | Link
Wireless Evacuation on m Rays with k Searchers | Sebastian Brandt, Klaus-Tycho Foerster, Benjamin Richner, and Roger Wattenhofer | Theoretical Computer Science | 2020 | Link
Online Graph Exploration on a Restricted Graph Class: Optimal Solutions for Tadpole Graphs | Sebastian Brandt, Klaus-Tycho Foerster, Jonathan Maurer, and Roger Wattenhofer | Theoretical Computer Science | 2020 | Link
Approximating Small Balanced Vertex Separators in Almost Linear Time | Sebastian Brandt and Roger Wattenhofer | Algorithmica | 2019 | Link
Augmenting Flows for the Consistent Migration of Multi-Commodity Single-Destination Flows in SDNs | Sebastian Brandt, Klaus-Tycho Foerster, and Roger Wattenhofer | Pervasive and Mobile Computing | 2017 | Link

A Glimpse into What We Do

For everyone who wants to know more about what kinds of problems we study and what kinds of questions we ask, let me try to give you an introduction to what we're interested in (for another introduction with lots of pictures, see here). To make things as accessible as possible, we will take a look at a typical problem we would like to solve and explain some settings and concepts along the way. Let me say up front that this will not give you anything resembling a comprehensive overview of the (fairly vast) field of distributed algorithms; what we will discuss below will hopefully give you a good first idea of what distributed graph algorithms are about, but we won't have time to touch upon many fascinating research directions of the distributed world, such as distributed complexity theory, distributed lower bounds, the role of randomness, and many more.

Alright, let me say at least a few words about the three directions I mentioned, before going to the meat of the introduction and looking at a concrete problem. If some of the terms I'm using do not make sense right away, I hope they will after reading the later part containing the proper introduction.

In distributed complexity theory, we want to map the landscape of time complexities that distributed problems can exhibit. One recent highlight is that for certain problem classes we now fully understand what this landscape looks like; fascinatingly, there are large gaps in the complexity landscape that one can use for speed-up results (just show that the time complexity of a problem is below the higher end of the gap; then we know that its complexity is at most the lower end of the gap, and usually this argument even produces an algorithm with a corresponding runtime).
Regarding distributed lower bounds (about which not too much was known, say, 7 years ago), recent years have seen an explosion of new results and techniques, and have shown that creativity is as important in proving impossibility results as in designing algorithms. A very successful technique, in its simplest form, transforms any given (too) fast algorithm into an even faster one, showing that such fast algorithms cannot exist. Another very creative technique consists of transforming a supposedly existing algorithm into a set of two-player games and obtaining a contradiction by showing that at least one of the games does not have a winning strategy for either player.
Regarding the role of randomness, there are intriguing results about how much randomness can help in solving a problem (e.g., often not more than exponentially), but as in the previous two research directions, there are many open questions. Some of these are related to a fascinating use of randomness in algorithm design, called shattering, where the idea is to have a fairly simple randomized process that shatters the input graph into small pieces, which can then be solved with the best available deterministic algorithm.

In all three directions, and in distributed graph algorithms in general, there is a lot we already understand, but even more that we don't know how to do. There have been a number of breakthroughs in recent years that provide new tools and techniques to work with, but for many open problems it is expected that something new is needed, e.g., some creative algorithmic idea that no one has thought of or a different way to look at things. I consider myself fortunate that I happened to stumble into this world at an almost ideal point in time: currently, a lot is happening in distributed algorithms, and our group will gladly contribute to this progress.

After this slight deviation, let me come back to the initial plan of showing you a typical problem that we would like to solve. This won't be just some arbitrary problem, but one of the really big open problems in our area. Still, after laying a bit of groundwork, it is not hard to describe. In fact, one could immediately start thinking about possible approaches for solving it, without knowing anything about techniques in our area. Of course, knowing some techniques will usually help a lot, but it isn't impossible that some fresh and unbiased idea is exactly what is needed to make progress.

In the area of distributed graph algorithms, which is currently our group's main research focus, the problems we want to solve are usually, well, graph problems. The problem we will be looking at is a simple graph coloring problem: we want to color the vertices of whatever graph is given to us with $\Delta + 1$ colors such that no two neighbors receive the same color. Here, $\Delta$ denotes the maximum degree of the graph, i.e., the degree of the vertex with the largest number of neighbors. A simple greedy argument shows that a vertex coloring with $\Delta + 1$ colors always exists: one can simply iterate through the vertices in an arbitrary order, one by one, and assign each vertex a color that none of its neighbors has received so far. Since each vertex has at most $\Delta$ neighbors, such a color always exists. Note that this argument doesn't work anymore if we have only $\Delta$ colors available; in fact, there are graphs that do not admit a $\Delta$-coloring (the complete graph on $\Delta + 1$ vertices being the simplest example).
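The greedy argument translates directly into a short sequential procedure; here is a minimal sketch in Python (the function name and the graph representation as an adjacency dictionary are our own choices, not part of the text):

```python
def greedy_coloring(adj):
    """Greedily (Delta+1)-color a graph given as {vertex: [neighbors]}."""
    delta = max((len(nbrs) for nbrs in adj.values()), default=0)
    color = {}
    for v in adj:  # iterate through the vertices in an arbitrary order
        taken = {color[u] for u in adj[v] if u in color}
        # A vertex has at most delta neighbors, so at most delta colors
        # are taken and some color in {0, ..., delta} must be free.
        color[v] = next(c for c in range(delta + 1) if c not in taken)
    return color

# Example: a 4-cycle has maximum degree Delta = 2, so 3 colors suffice.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
```

Note that this procedure relies on a global vertex order, which is exactly what a distributed algorithm does not have; that is what makes the distributed version of the problem interesting.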

While knowing that a $(\Delta + 1)$-coloring exists is already nice, as algorithm researchers we want to know more, namely, how and how fast we can compute such a $(\Delta + 1)$-coloring. However, as researchers in distributed algorithms, we are not satisfied with finding just some fast coloring algorithm: we want to design a fast algorithm that computes such a coloring in a distributed fashion. What does that mean? Let me explain our setting.

Imagine that each vertex of the input graph is a separate computational entity and that each edge is a communication link between the two entities corresponding to its endpoints. In other words, you can imagine the graph to be a computer network, and all computers work together to solve the given problem by sending around messages. In our case, each computer would like to choose a color that no neighbor has. Doesn't sound so complicated? If we don't care about how long it takes, that's correct. But we would like to have an algorithm that is very fast as a function of the size of the network, as nowadays networks can be very large (think of the internet). What does "fast" mean? And how exactly do these computers work together? Alright, let's be a bit more precise.

The joint computation performed by the vertices (i.e., by the associated computational devices) proceeds in synchronous rounds $r = 1, 2, 3, \ldots$. In each round, each vertex is allowed to send an arbitrarily large message to each of its neighbors. Moreover, at any point in time, any vertex can (internally) perform any computation on the knowledge it has gathered so far (e.g., in order to decide which message to send in the next round). At some point, each vertex has to terminate, upon which it provides its output (which in the $(\Delta + 1)$-coloring problem would be its own color) and no longer participates in the computation. What we would like to know (and what we call the runtime of the computation/algorithm) is simply how many synchronous rounds it takes until the last vertex terminates. Of course we want the vertices to terminate as fast as possible, but always under the condition that the (combined) output is correct (e.g., in our case it is required that the combined output constitutes a proper $(\Delta + 1)$-coloring). The focus on the number of rounds implies that internal computation is essentially free; all we care about (for now) is how many message-passing rounds we need. (For everyone from the maths side: there is an equivalent and very clean way to define the setting in a non-algorithmic fashion; we will provide details further below.)
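As a tiny illustration of the model (this example and all names in it are our own, not from the text): in the following sketch, every vertex repeatedly forwards the largest identifier it has seen to all of its neighbors, so after a number of rounds equal to the diameter of the graph, every vertex knows the globally largest identifier.

```python
def flood_max(adj, ids, num_rounds):
    """Simulate num_rounds synchronous rounds in which each vertex sends
    the largest identifier it has seen so far to all of its neighbors."""
    known = {v: ids[v] for v in adj}
    for _ in range(num_rounds):
        # All messages of a round are prepared before any vertex updates
        # its state: this is what makes the rounds synchronous.
        inbox = {v: [known[u] for u in adj[v]] for v in adj}
        known = {v: max([known[v]] + inbox[v]) for v in adj}
    return known

# On a path 0-1-2-3 (diameter 3), three rounds suffice.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
path_ids = {0: 5, 1: 9, 2: 1, 3: 4}
```

The runtime of this toy algorithm is the diameter of the graph, since information needs that many rounds to travel between the two most distant vertices.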

As algorithm designers, our task is to tell the vertices what messages they should send around and which internal computations they should perform (before we know anything about the input graph). In other words, we are supposed to provide one algorithm, one list of instructions, that is given to and executed by every vertex of the input graph. Of course, this list can contain instructions that are dependent on, e.g., the knowledge or degree of the vertex executing the instructions; for instance, an instruction could tell a vertex to perform some action only if its degree is at least $5$, or if it already received a certain kind of message. Due to the fact that each vertex executes the same algorithm, it is usually assumed that each vertex has a unique numerical identifier, i.e., each vertex is given some positive integer that no other vertex is given (but we have no control over which vertex gets which identifier). This ensures that even in very symmetrical input graphs, different vertices may perform different actions despite executing the same algorithm.

Let's have a look at a simple example algorithm. Say, we are guaranteed that the input graph is a directed cycle and we would like to compute a coloring with at most $6$ colors. Here is a very fast algorithm for this problem, devised by Cole and Vishkin in the '80s.

Initially, each vertex interprets its unique identifier as a color (where a color is nothing other than some positive integer). Then, in each round, each vertex chooses a new color such that the size of the color space shrinks, until it has size $6$. More specifically, the colors are updated in each round as follows. First, each vertex sends its own color to its parent. Then, each vertex computes the smallest index $i$ at which its own color, represented as a bit string, and the color it received from its child, also represented as a bit string, differ. Finally, each vertex updates its color $c$ to the new color $2i + c(i)$, where $c(i)$ denotes the bit of color $c$ at index $i$.

We claim that after each round, the new coloring is also proper. Our argument is that each vertex $v$ makes sure that it has a different color than its child. Why is that the case? Well, by the definition of $i$, if the child of $v$ also happens to use the same index $i$ for its new color, then the bit it adds will not be the same as the bit $c(i)$ that $v$ adds. Hence, $v$ and its child differ either in $i$ or in $c(i)$ (or in both), which ensures that their new colors are different. So we always preserve a proper coloring.

Now, how large is the runtime of our algorithm? Roughly speaking, since the new color is essentially an index into the bit string representing the old color, the new color is exponentially smaller than the old one! (The factor $2$ and the added bit can essentially be ignored in the analysis (they do not change the number of rounds asymptotically), but they are relevant in limiting how small we can make our final color space: once the color space has size $6$, it doesn't shrink anymore.) We conclude that even if the space from which our unique identifiers (which we use as the initial coloring) come is very large (say, the number of atoms in the observable universe), the algorithm is very, very fast. If the identifier space is called $S$, then, up to a constant factor, the number of rounds our algorithm takes is $\log^* S$, which is the minimum number of times one has to iteratively apply the logarithm function to $S$ to reach a value that is at most $1$. If $S = 10^{80}$, then $\log^* S = 5$, so our algorithm is really fast. Usually, the identifier space is chosen to be of size polynomial in the number $n$ of vertices of the input graph, in which case we can write the runtime of our algorithm as a function of $n$ (as desired): the algorithm terminates in $O(\log^* n)$ rounds.
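The color-update rule can be written down in a few lines. Here is a hedged sketch in Python: the function names, the simulation loop, and the choice of base-2 logarithm for $\log^*$ are our own; only the update rule $c \mapsto 2i + c(i)$ comes from the algorithm described above.

```python
import math

def log_star(x):
    """Iterated logarithm (base 2): how often log2 must be applied to x
    until the value is at most 1."""
    n = 0
    while x > 1:
        x, n = math.log2(x), n + 1
    return n

def cole_vishkin(ids, child):
    """Reduce unique identifiers on a directed cycle to a proper coloring
    with colors in {0, ..., 5}. ids: vertex -> identifier; child: vertex ->
    the vertex it points to on the cycle."""
    colors = dict(ids)
    while max(colors.values()) >= 6:
        new = {}
        for v, c in colors.items():
            d = colors[child[v]]                 # the child's current color
            diff = c ^ d                         # nonzero: coloring is proper
            i = (diff & -diff).bit_length() - 1  # lowest index where c, d differ
            new[v] = 2 * i + ((c >> i) & 1)      # new color: 2i + c(i)
        colors = new
    return colors

# A directed 5-cycle with distinct identifiers of very different sizes.
child = {v: (v + 1) % 5 for v in range(5)}
cycle_ids = {0: 9, 1: 130, 2: 44, 3: 1000003, 4: 7}
```

For $S = 10^{80}$, `log_star` indeed returns $5$, matching the computation above.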

After discussing the setting and looking at an example algorithm, let's now come back to the problem of computing a $(\Delta + 1)$-coloring. The best (deterministic) upper bound currently known is due to a result by Ghaffari and Kuhn from 2021. It states that a $(\Delta + 1)$-coloring can be computed (in the setting explained above) in $O(\log^3 n)$ rounds. This is already a decent runtime, but the best known lower bound is just $\Omega(\log^* n)$ (see, e.g., here)! So there is quite a large gap between the lower and upper bound, and according to our current knowledge it is still possible that there is an $O(\log^* n)$-round algorithm for $(\Delta + 1)$-coloring, i.e., an incredibly fast algorithm.

It is worth mentioning that we are interested in this problem not only because there is a large gap in our knowledge and potential for a very fast algorithm. No, $(\Delta + 1)$-coloring is also (or rather primarily) a very important problem because of its usefulness for solving other problems. There are many distributed algorithms (for very different problems) that start by computing a $(\Delta + 1)$-coloring and then, e.g., use the computed colors as a schedule for further computations. Hence, we can phrase one of the big open problems in the area of distributed algorithms as follows.


How fast can we compute a $(\Delta + 1)$-coloring distributedly?


Any progress in improving the upper or lower bounds would be of great interest to the distributed community! E.g., turning the $O(\log^3 n)$-round algorithm into an $O(\log^2 n)$-round algorithm or just improving the lower bound from $\Omega(\log^* n)$ to $\Omega((\log^* n)^2)$ would likely provide fascinating new insights into this important problem.

While this is all I wanted to say about $(\Delta + 1)$-coloring, let me come back to something I mentioned earlier, namely that there is an equivalent non-algorithmic way of defining our setting. If you think a bit about which messages a vertex should send to its neighbors, then you will realize that due to the fact that we do not restrict the message size in any way, each vertex should always send everything it has learned so far. If each vertex follows this rule, then in a $T$-round algorithm every vertex learns exactly its $T$-hop neighborhood in the graph (obviously it can learn nothing beyond distance $T$ since the information doesn't have enough time to travel that far). In other words, each vertex decides on its output purely by looking at its $T$-hop neighborhood. As such, we can understand a $T$-round algorithm as a function from the space of all possible vertex $T$-hop neighborhoods to the space of allowed vertex outputs (where the function describes what the center vertex of the $T$-hop neighborhood outputs). Having both the algorithmic message-passing definition and the non-algorithmic function definition is very useful; depending on the studied problem and the objective (algorithm, lower bound, ...), it could be either perspective that makes it easier to come up with a good approach (while technically one can always rephrase an algorithm given in the one formalism in the other formalism and vice versa).
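The $T$-hop view can also be made concrete (again a sketch of our own, not taken from the text): collect everything within distance $T$ of a vertex by breadth-first search; a $T$-round algorithm is then simply some function applied to this neighborhood.

```python
from collections import deque

def t_hop_neighborhood(adj, v, t):
    """Return all vertices within distance t of v, i.e., the part of the
    graph that v can learn about in t synchronous rounds."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == t:
            continue  # information cannot travel further than t hops
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

# On a path 0-1-2-3-4, a 2-round algorithm run at vertex 0 sees {0, 1, 2}.
path5 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

A $T$-round algorithm is then any map from such neighborhoods (including identifiers and any other local information) to outputs; the message-passing view and this function view can be translated into each other.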

I hope that I managed to give you at least a rough idea of what kinds of problems and questions we are interested in. Of course, what we have seen so far constitutes only a small part of the types of problems, questions, and settings that we study; as mentioned before, distributed algorithms is a vast field with lots of exciting open problems! If you want to learn more about distributed algorithms, have a look at some of the excellent lecture notes accompanying courses in this area (such as these or these).
