MS054 - Kernel Approximation Methods
Keywords: machine learning, reproducing kernel Hilbert spaces, high-dimensional approximation, scattered data approximation
Kernel-based approximation methods are powerful and versatile tools in numerical analysis and scientific computing. They are widely used for function reconstruction from scattered data, with applications ranging from surface reconstruction and image processing to geostatistics and machine learning. In the context of partial differential equations, kernel methods provide meshfree alternatives to traditional discretizations, enabling flexible and high-order accurate solvers that are particularly attractive on irregular or evolving domains.
Kernel methods typically rely on positive definite kernels to interpolate or approximate a target function from a finite set of data sites and corresponding values. One of their key advantages is flexibility in handling irregular data and complex geometries. However, their practical application poses significant computational challenges. The global support of commonly used kernels leads to dense and often ill-conditioned linear systems, which can hinder scalability and numerical stability. Furthermore, kernel methods suffer from the curse of dimensionality in high-dimensional problems, as computational cost and data requirements grow rapidly with the dimension.
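As a brief illustration of the structure behind these challenges (a generic sketch of standard kernel interpolation, with notation introduced here only for exposition): given data sites $x_1, \dots, x_N$ and values $f_1, \dots, f_N$, a positive definite kernel $K$ yields the interpolant
\[
  s(x) = \sum_{j=1}^{N} c_j \, K(x, x_j), \qquad \text{with coefficients determined by } A c = f, \quad A_{ij} = K(x_i, x_j).
\]
For globally supported kernels the $N \times N$ matrix $A$ is dense, and its condition number typically deteriorates as the data sites cluster, which is precisely the source of the scalability and stability issues mentioned above.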
The aforementioned challenges have motivated the development of a wide range of techniques to improve the efficiency and robustness of kernel methods. These include localization and sparsification strategies, compactly supported and multiscale kernels, hierarchical and multilevel approaches, domain decomposition techniques, and hybrid methods that combine kernel approximation with frameworks such as sparse grids or reduced bases.
This minisymposium brings together researchers working on both the theoretical and computational aspects of kernel approximation. Topics will include stability and convergence analysis, structure-exploiting numerical algorithms, scalable solvers, and recent advances in adaptive and data-driven kernel methods. Applications to partial differential equations, inverse problems, and learning tasks will also be featured.
