### Workshop

**Algorithms & Complexity in High Dimensions**

A five-day fall school and workshop on information-based complexity (IBC), stochastics, and numerical analysis.

**30.09.2019 – 04.10.2019 | Graz | Austria**

**Start of registration: 01.03.2019**

**Deadline: 31.08.2019**

Information-based complexity (IBC) studies optimal algorithms and the computational complexity of (continuous) mathematical problems for which the information is partial, contaminated, and priced. The typical question is how much information about a given problem is needed to solve it within a given error threshold, and under which circumstances one can avoid the curse of dimensionality. Important examples include high-dimensional numerical integration, approximation, and continuous optimization.

Stochastic differential equations (SDEs) are one of the most prominent topics of modern applied probability. A classical and robust method is the well-known Euler–Maruyama scheme, which is known to converge under Lipschitz assumptions on the coefficients of the SDE. Numerous questions arise about the convergence and optimality of numerical methods under stronger or weaker regularity assumptions.
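The Euler–Maruyama scheme mentioned above can be sketched in a few lines. The following is a minimal illustration (not code from the workshop): it discretizes a scalar SDE and, as a sanity check, uses geometric Brownian motion, whose mean E[X_T] = x0·e^{aT} is known in closed form. All parameter values are illustrative.

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, T, n, rng):
    """One Euler-Maruyama path for dX_t = mu(X_t) dt + sigma(X_t) dW_t."""
    dt = T / n
    x = x0
    for _ in range(n):
        # Brownian increment dW ~ N(0, dt)
        x = x + mu(x) * dt + sigma(x) * rng.normal(0.0, np.sqrt(dt))
    return x

# Geometric Brownian motion dX = a*X dt + b*X dW has E[X_T] = x0*exp(a*T),
# so a Monte Carlo average of Euler paths should land close to that value.
a, b, x0, T = 0.05, 0.2, 1.0, 1.0
rng = np.random.default_rng(0)
paths = [euler_maruyama(lambda x: a * x, lambda x: b * x, x0, T, 200, rng)
         for _ in range(2000)]
mean_est = np.mean(paths)   # should be near exp(0.05)
```

The scheme converges strongly with order 1/2 under the Lipschitz assumptions mentioned above; weaker assumptions on the coefficients are exactly where the open questions discussed at the workshop begin.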

Discrepancy theory deals with the distribution of points in compact spaces, measuring the 'discrepancy' between an empirical distribution and a target distribution; in other words, how far a given distribution deviates from an ideal one. Arguably the most prominent problems are the star-discrepancy problem, asking for the optimal rate of convergence of the supremum of the discrepancy function, and the inverse star-discrepancy problem, asking for explicit constructions of point sets whose discrepancy depends at most polynomially on the dimension.
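In dimension one, the star-discrepancy of a finite point set can be computed exactly from the sorted points. A small sketch (the function name is our own, and the formula is the standard closed form in one dimension):

```python
def star_discrepancy_1d(points):
    """Exact star discrepancy of a point set in [0, 1), dimension one.

    D*_n = sup_t |#{x_i < t}/n - t| is attained at the sorted points:
    D*_n = max_i max(i/n - x_(i), x_(i) - (i-1)/n), i = 1, ..., n.
    """
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

# The centered regular grid {(2i+1)/(2n)} attains the optimal value 1/(2n).
n = 8
grid = [(2 * i + 1) / (2 * n) for i in range(n)]
d = star_discrepancy_1d(grid)   # 1/(2n) = 1/16 here
```

In higher dimensions no such cheap formula exists; computing the star-discrepancy exactly is NP-hard in the dimension, which is one reason the inverse problem above is so delicate.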

University of Jena

**What is Information-Based Complexity?**

We give a short introduction to IBC and present some basic definitions and a few results. The general question is: How many function values (or values of other functionals) of a function f that depends on d variables do we need to compute S(f) up to an error ϵ? Here S(f) could be the integral or the maximum of f. In particular we study the question: When do we suffer the curse of dimensionality, i.e., when does the computing time increase exponentially with the dimension d?

As it turns out, smoothness properties alone usually do not imply tractability; we need assumptions on the structure of f. For some problems, another way to obtain tractability is the use of randomized (Monte Carlo) algorithms.
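The dimension-independent n^{-1/2} rate of plain Monte Carlo can be seen in a small experiment. The test function below is an illustrative choice (not from the lecture) with integral exactly 1 in every dimension, so the error is easy to read off even in d = 50:

```python
import numpy as np

def mc_integrate(f, d, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0,1]^d."""
    x = rng.random((n, d))       # n i.i.d. uniform points in [0,1]^d
    return f(x).mean()

# f(x) = prod_j (1 + (x_j - 1/2)/j) has integral exactly 1 in every
# dimension, since each factor integrates to 1 over [0,1].
def f(x):
    j = np.arange(1, x.shape[1] + 1)
    return np.prod(1.0 + (x - 0.5) / j, axis=1)

rng = np.random.default_rng(1)
est = mc_integrate(f, d=50, n=20000, rng=rng)
# Error is of order n^{-1/2}, independent of d -- no curse here.
```

Deterministic algorithms with the same information budget can suffer the curse of dimensionality on such classes; randomization is one escape route, structural assumptions on f are another.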

RICAM Linz

**Constructions of integration rules**

We discuss the problem of how to find and/or construct node sets for efficient high-dimensional integration rules. While explicit constructions are known for a few methods, it is in most cases neither easy nor computationally efficient to identify node sets that yield a small integration error. For some very prominent kinds of integration rules, such as rank-1 lattice rules, no explicit constructions are even known for dimensions d > 2. In such cases, one has to resort to computer search algorithms. We present selected results and algorithms for the construction of high-dimensional integration rules with many points, such as (t, m, d)-nets, lattice rules, and polynomial lattice rules. Under suitable conditions, these rules show almost optimal error convergence and imply tractability.
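As a toy illustration of a rank-1 lattice rule (the d = 2 case, where explicit constructions do exist), the classical Fibonacci lattice with generating vector z = (1, F_{k-1}) and n = F_k points; the test function is our own choice:

```python
import math

def lattice_rule(f, z, n):
    """Rank-1 lattice rule: average of f over the points {(i*z mod n)/n}."""
    return sum(f([((i * zj) % n) / n for zj in z]) for i in range(n)) / n

# Fibonacci lattice in d = 2: n = F_15 = 610, z = (1, F_14) = (1, 377).
# For the periodic test function below (integral 1) the rule is in fact
# exact, since no frequency h != 0 with |h_j| <= 1 satisfies
# h . z = 0 (mod n).
n, z = 610, (1, 377)
f = lambda x: math.prod(1 + math.sin(2 * math.pi * xj) for xj in x)
approx = lattice_rule(f, z, n)   # equals 1 up to rounding
```

For d > 2 one would instead search for z, e.g. by a component-by-component construction; that search, and the error bounds it guarantees, is what the lecture is about.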

University of Passau

**Approximation of SDEs**

We provide a brief introduction to stochastic differential equations (SDEs) and then study the complexity of approximating the solution X=(X_t)_{t\in[0,1]} of an SDE. We focus mainly on pathwise approximation of X, i.e., we study questions of the type: how well can X_1 be approximated in the mean-square sense based on n evaluations of the driving Brownian motion at times t_1,\dots,t_n\in [0,1]? Can we reduce the error significantly if we allow the evaluation times t_i to be chosen in a path-dependent way? We present corresponding classical results for SDEs with Lipschitz continuous coefficients and provide insight into recent developments in this field of research for SDEs with coefficients that do not satisfy standard smoothness assumptions.

JKU Linz

**On the power of random information**

The solution S(f) of a numerical problem often depends on a function f. For example, S(f) could be the integral or the maximum of f. It could also be the function itself. We want to compute S(f), but only have incomplete information about f: a certain a priori knowledge and the outcome of finitely many measurements. We therefore cannot avoid making an error. The measurements might be point evaluations or other linear functionals. Usually we assume that we can choose the measurements at will, and we try to choose them in a way that minimizes the error. In this case, we talk about optimal information. In this talk we assume that we do not get to choose the measurements. Instead, we imagine that the information comes in randomly and ask: How much do we lose in comparison to optimal information? We study this question for several examples, where the answers range from *almost nothing* to *almost everything*.
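A toy version of the optimal-vs-random comparison, for univariate integration of a smooth function: chosen (equispaced midpoint) nodes stand in for optimal information, i.i.d. uniform nodes for random information. Both rules and the test function are illustrative, not taken from the talk:

```python
import numpy as np

def midpoint_rule(f, n):
    """Chosen information: n equispaced midpoint nodes in [0,1]."""
    x = (np.arange(n) + 0.5) / n
    return f(x).mean()

def random_rule(f, n, rng):
    """Random information: n i.i.d. uniform nodes we did not get to choose."""
    return f(rng.random(n)).mean()

f = lambda x: np.exp(x)          # integral over [0,1] is e - 1
exact = np.e - 1
rng = np.random.default_rng(2)
n = 1000
err_opt = abs(midpoint_rule(f, n) - exact)      # O(n^-2) for smooth f
err_rand = abs(random_rule(f, n, rng) - exact)  # typically O(n^-1/2)
```

For this problem random nodes lose a lot compared to optimal ones; the point of the talk is that for other problems (and other error criteria) random information can be nearly as good as optimal information.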

University of Graz

**Simulation of Stochastic Processes and Stochastic Differential Equations**

This unit gives a basic introduction to stochastic processes, including the most prominent example of a Brownian motion. Moreover, the construction of the stochastic integral – the so-called Itô integral – is carried out and some of its properties are given. Finally, the basics of stochastic differential equations are explained and some first approximation schemes, including the Euler–Maruyama method and the Milstein method, are introduced and examined.
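The two schemes named above can be compared on geometric Brownian motion, where the exact solution is available along each Brownian path; a minimal sketch with illustrative parameters (not course material):

```python
import numpy as np

def strong_errors(a, b, x0, T, n, rng):
    """One path: Euler-Maruyama vs Milstein for dX = a*X dt + b*X dW,
    compared with the exact solution x0*exp((a - b^2/2)*T + b*W_T)."""
    dt = T / n
    dw = rng.normal(0.0, np.sqrt(dt), n)    # Brownian increments
    xe = xm = x0
    for k in range(n):
        xe = xe + a * xe * dt + b * xe * dw[k]
        # Milstein adds the correction 0.5*b^2*X*(dW^2 - dt)
        xm = xm + a * xm * dt + b * xm * dw[k] \
                + 0.5 * b * b * xm * (dw[k] ** 2 - dt)
    exact = x0 * np.exp((a - 0.5 * b * b) * T + b * dw.sum())
    return abs(xe - exact), abs(xm - exact)

a, b, x0, T = 0.1, 0.5, 1.0, 1.0
rng = np.random.default_rng(3)
errs = [strong_errors(a, b, x0, T, 256, rng) for _ in range(200)]
mean_euler, mean_milstein = np.mean(errs, axis=0)
# Euler-Maruyama has strong order 1/2 here, Milstein order 1,
# so the Milstein error is markedly smaller at the same step size.
```

The correction term is exactly what raises the strong order from 1/2 to 1 for SDEs with smooth diffusion coefficients; for additive noise the two schemes coincide.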

**Gunther Leobacher**, **Joscha Prochno**, **Stefan Kremsner**

Institute of Mathematics and Scientific Computing

Heinrichstraße 36

8010 Graz

**Gunther Leobacher**

Institute of Mathematics and Scientific Computing

University of Graz

Heinrichstraße 36

8010 Graz, Austria