Low-rank tensor techniques generalize low-rank matrix approximation to higher-order tensors. They have established themselves as a powerful tool for addressing problems of very high dimension without succumbing to the curse of dimensionality. Typical problems include differential and eigenvalue equations whose solutions are functions of a large number of variables or parameters; such problems arise in quantum physics, computational chemistry, and stochastic or parametric PDEs. At the same time, low-rank tensor decompositions are successfully used in exploratory data analysis, signal processing, and statistics.
The key idea is old and simple: separation of variables. The sought solution is approximated by a structured linear combination of products of functions of few variables (often univariate). This allows a data-sparse representation in a low-rank tensor format (of which there exist many) with a storage complexity that grows only linearly in the number of dimensions (variables) instead of exponentially. One can then design algorithms that solve the problem entirely within the chosen low-rank format.
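To make the storage count concrete, here is a minimal sketch in NumPy: a full tensor on a tensor-product grid versus its representation in the CP (canonical polyadic) format, one of the low-rank formats alluded to above. All sizes (`d`, `n`, `r`) are illustrative choices, not values from the lecture.

```python
import numpy as np

d = 6   # number of dimensions (variables); illustrative
n = 10  # grid points per dimension; illustrative
r = 3   # separation rank; illustrative

# Full tensor: n**d entries -- exponential in d.
full_storage = n ** d

# CP format: r rank-1 terms, each a product of d univariate factor
# vectors of length n -- storage grows only linearly in d.
cp_storage = r * d * n

print(full_storage, cp_storage)  # 1000000 vs 180

# One rank-1 term is the outer product of its d factor vectors:
factors = [np.random.rand(n) for _ in range(d)]
term = factors[0]
for f in factors[1:]:
    term = np.tensordot(term, f, axes=0)  # successive outer products
assert term.shape == (n,) * d
```

Already at these modest sizes the separated representation needs 180 numbers instead of a million; the gap widens exponentially as `d` grows.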
In this lecture we introduce the basic concepts of this timely research field within numerical analysis. Intended topics include:
Prerequisites: numerical linear algebra / matrix analysis (in particular matrix norms, singular value decomposition), basic concepts of smooth manifolds and constrained optimization, spectral theory of compact operators on Hilbert spaces, basic Sobolev spaces (say L^2 and H^s).
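As a quick self-check on the SVD prerequisite, the following hedged sketch (NumPy, with an arbitrary random matrix) recovers the best rank-k approximation by truncating the singular value decomposition; by the Eckart–Young theorem, the spectral-norm error is the (k+1)-st singular value.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))  # arbitrary test matrix

# Thin SVD: A = U @ diag(s) @ Vt with s sorted in decreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
# Rank-k truncation: keep the k leading singular triplets.
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]

# Eckart-Young: the spectral-norm error equals sigma_{k+1}.
err = np.linalg.norm(A - A_k, ord=2)
assert np.isclose(err, s[k])
```

The higher-order tensor formats treated in the lecture generalize exactly this truncation idea, which is why familiarity with the matrix case is assumed.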
Date & time: Wednesday, 12.15–13.45, Wegelerstr. 6, SemR We 6.020
The lecture is accompanied by an optional seminar for discussing additional topics.
Date & time: Wednesday, 14.15–15.45, Wegelerstr. 6, SemR We 6.020
The seminar may not meet weekly. If you are interested, please come to the first lecture.