
Research Interests

  • Computability Theory, in particular computability on topological structures
  • Domain Theory
  • Complexity Theory, in particular Implicit Computational Complexity
  • Logic

Central to Theoretical Computer Science - if not to Computer Science in general - is the notion of an algorithm. An algorithm for the solution of a problem or the computation of a function is a finite, unambiguous instruction for the execution of simple, finite operations on a (digital) machine which, after finitely many steps, yields a result for every instance of the given problem class or every argument of the given function.
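
As a minimal illustration of this notion, consider Euclid's algorithm for the greatest common divisor, sketched below in Python (the sketch is purely illustrative): a finite, unambiguous instruction built from simple operations that terminates after finitely many steps for every pair of arguments.

    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: for any pair of natural numbers it yields the
        greatest common divisor after finitely many simple steps
        (comparisons and remainder operations)."""
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(252, 198))  # prints 18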

Of course, this is not a precise mathematical definition. The formalisation of the intuitive notion of algorithm described above is one of the greatest intellectual achievements of the last century. The notion was carefully analysed and various formalisms - partly of quite different nature - were proposed. As it turned out, all these formalisms lead to the same notion of computable function (over the natural numbers or the finite words over some fixed alphabet, respectively). This led to the by now generally accepted thesis that an adequate (mathematically precise) formalisation of the intuitive notion of algorithm has been obtained (the Church-Turing Thesis).

Once this step had been taken, new questions came up:

  • What can be done algorithmically, i.e., which problems can be solved and which functions can be computed mechanically?
  • Are there any limits?
  • How much of a given resource (time, space, ...) is at least needed, or at most sufficient, in order to solve a given problem or compute a given function?
  • Given a program, does it really do what it is supposed to do?

Questions like these are dealt with in Computability Theory, Complexity Theory, and the Theory of Programming, respectively.

Usually, computability and complexity investigations consider functions and problems defined on finite data such as the natural numbers or finite words over some alphabet. In many areas, such as logic and graph theory, it is sufficient to consider such data sets. In physics and engineering, however, one has to deal with continuous data such as real numbers. Already at the very beginning of computability theory, people started to think about what it means to compute a real number or a real-valued function. In recent years, interest in these questions has increased considerably, for many reasons. Of course, one wants to study questions such as those listed above in the area of Scientific Computation.
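
One common way to make "computing a real number" precise is to ask for an algorithm that, given n, produces a rational number within 2^-n of it. The following minimal Python sketch (an illustration of this idea only) witnesses that the square root of 2 is computable in this sense, using exact rational arithmetic:

    from fractions import Fraction

    def sqrt2_approx(n: int) -> Fraction:
        """Return a rational q with |q - sqrt(2)| <= 2**-n, by bisecting the
        interval [1, 2] with exact rational arithmetic."""
        low, high = Fraction(1), Fraction(2)      # sqrt(2) lies in [1, 2]
        while high - low > Fraction(1, 2**n):
            mid = (low + high) / 2
            if mid * mid <= 2:                    # exact comparison, no rounding
                low = mid
            else:
                high = mid
        return low

    print(float(sqrt2_approx(20)))                # approx. 1.4142135, within 2**-20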

In general, in numerical computations one uses floating-point numbers to approximate real numbers. Since only a limited range of such numbers is available on an actual machine, rounding errors occur and results may be unreliable. Even worse: the algebraic structure of the collection of these numbers differs substantially from the structure of the reals. This means that the usual implementations of the data type of the reals are not sound with respect to the intended semantics.
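
A small Python experiment illustrates both effects (the numbers are chosen only for demonstration): rounding makes results unreliable, and basic laws of real arithmetic, such as the associativity of addition, fail for floating-point numbers.

    # Rounding error: the decimal 0.1 has no exact binary floating-point
    # representation, so repeated addition drifts away from the exact value.
    print(sum(0.1 for _ in range(10)))   # 0.9999999999999999, not 1.0

    # The algebraic structure differs from the reals: floating-point
    # addition is not associative.
    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)                   # 1.0
    print(a + (b + c))                   # 0.0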

A large part of the modern research in Theoretical Computer Science mentioned above can be subsumed under the slogan Computing with Exact Real Numbers. To this end, the space of the reals is enlarged by partial reals, i.e., objects used for approximation - mostly closed rational intervals - which appear as the intermediate results of a computation. Programming languages are designed that work with this extended real number data type. In this way, sound implementations are obtained, and as a consequence formal correctness proofs can be given. Normally, first writing a program and then proving its correctness is a tedious task. A more promising way is to first prove a statement of the form (∀ x)(∃ y)R(x,y) and then extract from the proof a program that computes a function f such that (∀ x)R(x,f(x)). This insight has led to a huge research programme that aims at reworking large parts of analytical mathematics so that its algorithmic and/or constructive content becomes more visible.
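
The following Python sketch illustrates this style of computation in a much simplified form (the bisection-based representation is an assumption made for the example, not the implementation used in this research): a real number is given by a shrinking sequence of closed rational intervals, these partial reals are the intermediate results, and operations on them are sound because the exact value always lies in the computed interval.

    from fractions import Fraction
    from typing import Iterator, Tuple

    Interval = Tuple[Fraction, Fraction]          # a closed rational interval [l, u]

    def sqrt2_intervals() -> Iterator[Interval]:
        """sqrt(2) as a shrinking sequence of closed rational intervals,
        each of which is a partial real approximating it."""
        low, high = Fraction(1), Fraction(2)
        while True:
            yield (low, high)
            mid = (low + high) / 2
            if mid * mid <= 2:
                low = mid
            else:
                high = mid

    def add(xs: Iterator[Interval], ys: Iterator[Interval]) -> Iterator[Interval]:
        """Interval addition: sound, since the exact sum of the represented
        reals always lies in the resulting interval."""
        for (xl, xu), (yl, yu) in zip(xs, ys):
            yield (xl + yl, xu + yu)

    # Approximate sqrt(2) + sqrt(2) to within 2**-20.
    for (l, u) in add(sqrt2_intervals(), sqrt2_intervals()):
        if u - l <= Fraction(1, 2**20):
            print(float(l), float(u))             # both close to 2.8284271...
            break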

In mathematical programming language semantics one preferably works with a structure called a domain, and with generalisations thereof. Domains, introduced by Dana Scott, are lattice-like partial orders and come with a canonical topology that reflects essential properties of computations; topology and order structure determine each other. Usually the order is interpreted as the relation "is more defined than" or "contains more information than". As a consequence, in the language of this structure only qualitative approximation statements can be made. In analytical mathematics, on the other hand, we are used to dealing with metric structures and to making quantitative approximation statements. Fortunately, it has turned out that domains admit weak notions of distance. This has led to several new questions.
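
A standard example of such a structure is the interval domain over the reals, sketched below in Python (the class and method names are chosen for illustration only): its elements are closed intervals, one interval contains more information than another if it is contained in it, and the width of an interval provides a simple quantitative measure on top of this qualitative order.

    from fractions import Fraction
    from typing import NamedTuple

    class Interval(NamedTuple):
        """An element of the interval domain: a closed rational interval,
        viewed as partial information about an unknown real number."""
        low: Fraction
        high: Fraction

        def refines(self, other: "Interval") -> bool:
            """Information order: self is more defined than other iff self
            is contained in other, i.e. it pins the real down further."""
            return other.low <= self.low and self.high <= other.high

        def width(self) -> Fraction:
            """A quantitative measure of the remaining uncertainty."""
            return self.high - self.low

    x = Interval(Fraction(1), Fraction(2))
    y = Interval(Fraction(7, 5), Fraction(3, 2))  # [1.4, 1.5]
    print(y.refines(x), x.refines(y))             # True False
    print(float(y.width()))                       # 0.1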

Investigations in complexity theory are usually based on a very simple machine model, the Turing machine, and study the time or space used by computations on this machine. With these results in mind, a programmer can choose the algorithm that best suits his or her needs. However, the structure of Turing machine programs corresponds only poorly to the structure of modern programming languages. What a programmer would need are results that allow the consumption of resources to be estimated by considering only the structure of the program, in particular its control structures. This is the aim of a new branch of complexity theory: Implicit Computational Complexity.
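
The following Python sketch illustrates the underlying idea in an informal way (it is not one of the calculi actually studied in Implicit Computational Complexity): two programs compute the same function, but only for the first one can a polynomial bound on the number of basic steps be read off the control structure, since its only loop is bounded by the input.

    def fib_bounded(n: int) -> int:
        """The single loop is bounded by the input, so a polynomial bound on
        the number of basic steps is visible in the program structure."""
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    def fib_recursive(n: int) -> int:
        """Unrestricted recursion: nothing in the control structure reveals
        that this version makes exponentially many calls."""
        if n < 2:
            return n
        return fib_recursive(n - 1) + fib_recursive(n - 2)

    print(fib_bounded(30), fib_recursive(30))     # 832040 832040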