WP5: Optimization

WP5 advances exascale-ready optimization methods for continuous, combinatorial, and mixed problems, with surrogate-based and AI-driven approaches.

Recent Highlights (2024-2025)

Bayesian Optimization at LUMI

Bayesian optimization scaled to 8000 GPUs on the LUMI supercomputer using fractal decomposition

WP2 Integration

Surrogate-based workflows coupled with WP2 ML models

Simulation-Based

High-performance simulation-based optimization

Deep GPs

Multi-fidelity modeling with deep Gaussian processes

AutoML for HPC

AutoML experiments on HPC software parameter tuning

1. Objectives

WP5 focuses on designing and implementing exascale optimization algorithms for large-scale problems:

  • Combinatorial, continuous and mixed optimization using exact and approximate algorithms

  • Surrogate-based optimization using multi-fidelity models

  • Shape optimization for multiphysics problems

  • AutoML for automatic design of deep neural networks

2. Approach

[Diagram: WP5 approach]

3. Key Tasks

T5.1: Exascale Combinatorial and Continuous Optimization

  • Design general exascale optimization algorithms

  • Exact algorithms: branch and bound, tree search

  • Approximate algorithms: evolutionary algorithms, swarm intelligence

  • Decomposition-based exascale optimization for large-scale optimization problems (LOPs)

  • Define efficient decomposition strategies in decision and objective spaces
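To illustrate the exact, tree-search side of T5.1, here is a minimal serial branch-and-bound sketch for the 0/1 knapsack problem. It is a toy stand-in only (the work package targets parallel, exascale variants); all names are illustrative and item weights are assumed positive.

```python
import heapq

def knapsack_branch_and_bound(values, weights, capacity):
    """Best-first branch and bound for 0/1 knapsack (serial, illustrative).

    Assumes all weights are positive. Uses the fractional (LP) relaxation
    as the upper bound for pruning.
    """
    n = len(values)
    # Visit items in decreasing value density so the fractional bound is tight.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(level, value, weight):
        # Fractional relaxation: greedily pack remaining items, splitting the last.
        b, w = value, weight
        for i in order[level:]:
            if w + weights[i] <= capacity:
                w += weights[i]
                b += values[i]
            else:
                b += values[i] * (capacity - w) / weights[i]
                break
        return b

    best = 0
    # Max-heap on the bound (heapq is a min-heap, so bounds are negated).
    heap = [(-bound(0, 0, 0), 0, 0, 0)]  # (-bound, level, value, weight)
    while heap:
        neg_b, level, value, weight = heapq.heappop(heap)
        if -neg_b <= best or level == n:
            continue  # prune: this subtree cannot beat the incumbent
        i = order[level]
        # Branch 1: include item i if it still fits.
        if weight + weights[i] <= capacity:
            new_val, new_wt = value + values[i], weight + weights[i]
            best = max(best, new_val)
            heapq.heappush(heap, (-bound(level + 1, new_val, new_wt),
                                  level + 1, new_val, new_wt))
        # Branch 2: exclude item i.
        heapq.heappush(heap, (-bound(level + 1, value, weight),
                              level + 1, value, weight))
    return best
```

In a parallel setting, the heap of open subproblems is exactly what decomposition strategies distribute across nodes.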

T5.2: Exascale Surrogate-Based Optimization

  • Adapt SBO algorithms to exploit exascale HPC systems

  • Evaluate multiple candidate solutions in each iteration

  • Exploit multi-fidelity models (MFMs)

  • Design new parallel SBO algorithms considering:

    • The surrogate

    • The optimizer and its sampling strategy

    • The coupling between them

  • Leverage machine learning to model expensive functions within SBO and MFM frameworks
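A minimal, single-node sketch of the batch SBO loop described above, using a simple RBF interpolant as the surrogate and a crude exploit/explore rule in place of a real acquisition strategy. All function and parameter names are illustrative, not the project's actual API; at scale, each batch of evaluations would run concurrently.

```python
import numpy as np

def rbf_surrogate(X, y, lengthscale=0.5, reg=1e-6):
    """Fit a 1-D RBF interpolant s(x) = sum_i w_i * exp(-(x - x_i)^2 / (2 l^2))."""
    X = np.asarray(X)[:, None]
    K = np.exp(-(X - X.T) ** 2 / (2 * lengthscale ** 2))
    w = np.linalg.solve(K + reg * np.eye(len(X)), np.asarray(y))

    def predict(x):
        k = np.exp(-(np.asarray(x)[:, None] - X.T) ** 2 / (2 * lengthscale ** 2))
        return k @ w

    return predict

def sbo_minimize(f, bounds, n_init=5, batch=4, iters=8, seed=0):
    """Batch SBO: fit surrogate, propose a batch of points, evaluate, refit."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = list(rng.uniform(lo, hi, n_init))   # initial design
    y = [f(x) for x in X]
    for _ in range(iters):
        predict = rbf_surrogate(X, y)
        cand = rng.uniform(lo, hi, 256)     # cheap candidate pool
        # Exploit: half the batch at the lowest surrogate predictions.
        exploit = cand[np.argsort(predict(cand))[: batch // 2]]
        # Explore: the other half sampled at random.
        explore = rng.uniform(lo, hi, batch - len(exploit))
        new_x = np.concatenate([exploit, explore])
        X.extend(new_x)                 # these evaluations are independent
        y.extend(f(x) for x in new_x)   # and would run in parallel on HPC
    i = int(np.argmin(y))
    return X[i], y[i]
```

The three design axes listed above map directly onto this sketch: the surrogate (`rbf_surrogate`), the optimizer's sampling strategy (the candidate pool and exploit/explore split), and the coupling (refitting after each batch).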

T5.3: Exascale Shape Optimization

  • Develop toolbox for shape optimization at exascale

  • Study different meshes for the state and design variables

  • Use reduced models from WP2 (e.g., neural networks) for faster evaluations

  • Find trade-off between cost and accuracy
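The flavor of shape optimization can be conveyed with a deliberately tiny example: minimizing an ellipse's perimeter at fixed area by penalized finite-difference descent. This is a stand-in for the adjoint-based, mesh-level machinery a real toolbox needs; all names and constants are illustrative.

```python
import math

def perimeter(a, b):
    """Ramanujan's approximation to the perimeter of an ellipse with semi-axes a, b."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

def shape_optimize(area=math.pi, steps=2000, lr=0.005, penalty=5.0):
    """Minimize ellipse perimeter at fixed area pi*a*b = area.

    Uses a quadratic penalty for the area constraint and central finite
    differences for the gradient; the optimum for area = pi is the circle
    a = b = 1 (up to a small penalty-induced offset).
    """
    def cost(a, b):
        return perimeter(a, b) + penalty * (math.pi * a * b - area) ** 2

    a, b, eps = 2.0, 0.5, 1e-6
    for _ in range(steps):
        ga = (cost(a + eps, b) - cost(a - eps, b)) / (2 * eps)
        gb = (cost(a, b + eps) - cost(a, b - eps)) / (2 * eps)
        # Gradient step, clamped away from degenerate shapes.
        a, b = max(a - lr * ga, 0.1), max(b - lr * gb, 0.1)
    return a, b
```

In the exascale setting, each `cost` evaluation is a full multiphysics solve on a mesh, which is why T5.3 trades exact evaluations against WP2 reduced models.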

T5.4: Exascale Optimization for AutoML

  • Develop optimization approaches for automatic design of deep neural networks (DNNs)

  • Optimize hyper-parameters (AutoML)

  • Address complex problems (dataset and network size)

  • Improve DNN accuracy while reducing energy consumption and inference time

  • Improve robustness and solve large-scale/complex learning tasks
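A toy sketch of the evolutionary hyper-parameter search underlying AutoML, with a synthetic validation loss standing in for an expensive DNN training run. The response surface, names, and parameter choices are all hypothetical.

```python
import math
import random

WIDTHS = [16, 32, 64, 128, 256]

def validation_loss(lr, width):
    # Synthetic stand-in for a full DNN training + validation run; the
    # optimum (lr = 1e-2, width = 64) is invented for illustration.
    return (math.log10(lr) + 2) ** 2 + ((width - 64) / 64) ** 2

def evolve_hyperparams(pop_size=12, generations=10, seed=1):
    """Truncation-selection evolutionary search over (learning rate, width)."""
    rng = random.Random(seed)
    pop = [(10 ** rng.uniform(-5, 0), rng.choice(WIDTHS)) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the better half as parents; in practice each evaluation is a
        # full training run, so a generation is executed in parallel on HPC.
        parents = sorted(pop, key=lambda h: validation_loss(*h))[: pop_size // 2]
        children = []
        for lr, width in parents:
            new_lr = 10 ** (math.log10(lr) + rng.gauss(0, 0.3))  # perturb log-lr
            new_w = width if rng.random() < 0.7 else rng.choice(WIDTHS)
            children.append((min(max(new_lr, 1e-5), 1.0), new_w))
        pop = parents + children
    return min(pop, key=lambda h: validation_loss(*h))
```

Real AutoML at exascale replaces the scalar loss with multiple objectives (accuracy, energy, inference time) and the two-dimensional search space with full architecture encodings, but the evaluate-select-mutate structure is the same.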

4. Leads & Partners

Lead Institution

Inria

Co-Leaders

UNISTRA

Duration

Months 1-60

5. Addressed Exascale Bottlenecks

WP5 targets bottlenecks B7, B9, B10, B13:

  • B7 (Exascale Algorithms): Redesigning optimization algorithms to improve scalability

  • B9 (Resilience, robustness and accuracy): Ensuring robust optimization with verifiable results

  • B10 (Scientific productivity): Providing tools for scientists to use exascale systems productively

  • B13 (Opportunity to integrate uncertainties): Optimization under uncertainty

6. Deliverables

  • D5.1-MR: Activity reports (included in annual report D0.2-TR); due M12, M24, M36, M48, M60

  • D5.2-S: Software package for optimization and shape optimization; due M36, M48, M60

  • D5.3-B: Benchmarking analysis report (bottlenecks and breakthroughs); due M12, M24, M36, M48, M60

7. Collaborations

  • WP2: Use of reduced models and neural networks (T5.3), multi-fidelity models (T5.2)

  • WP6: Optimization under uncertainty

  • WP7: Benchmarking analysis and integration

  • NumPEx Projects: ExaSoft, ExaDoST