

Poster in Workshop: Physics for Machine Learning

PDEBENCH: AN EXTENSIVE BENCHMARK FOR SCIENTIFIC MACHINE LEARNING

Makoto Takamoto · Timothy Praditia · Raphael Leiteritz · Dan MacKinlay · Francesco Alesiani · Dirk Pflüger · Mathias Niepert


Abstract:

Despite some impressive progress in machine learning-based modeling of physical systems, there is still a lack of benchmarks for Scientific ML that are easy to use yet challenging and representative of a wide range of problems. We introduce PDEBENCH, a benchmark suite of time-dependent simulation tasks based on Partial Differential Equations (PDEs). PDEBENCH comprises both code and data to benchmark the performance of novel machine learning models against classical numerical simulations and ML baselines. Our proposed set of benchmark problems contributes the following features: (1) a much wider range of PDEs compared to existing benchmarks, ranging from relatively common examples to more realistic problems; (2) much larger ready-to-use datasets compared to prior work, comprising multiple simulation runs across a large number of initial and boundary conditions and PDE parameters; (3) more extensible source code with user-friendly APIs for data generation and for obtaining baselines of popular machine learning models (FNO, U-Net, PINN, Gradient-Based Inverse Method). PDEBENCH allows users to extend the benchmark freely for their own purposes using a standardized API and to compare the performance of new models to existing baseline methods. We also propose new evaluation metrics in order to provide a more holistic understanding of model performance in the context of Scientific ML.
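As a rough illustration of the kind of workflow the abstract describes (ready-to-use HDF5 datasets plus rollout-error metrics such as a normalized RMSE), the sketch below loads one trajectory and scores a trivial baseline against it. The file name, dataset key, and array layout are assumptions for illustration only, not the benchmark's fixed API; consult the PDEBench repository for the actual data format and evaluation code.

```python
# Minimal sketch, assuming a PDEBench-style HDF5 file with a single dataset
# holding trajectories laid out as (n_samples, n_timesteps, n_space, ...).
import h5py
import numpy as np


def normalized_rmse(pred: np.ndarray, target: np.ndarray) -> float:
    """RMSE of the prediction, normalized by the RMS magnitude of the target."""
    err = np.sqrt(np.mean((pred - target) ** 2))
    scale = np.sqrt(np.mean(target ** 2))
    return float(err / scale)


# File path and dataset key are hypothetical placeholders.
with h5py.File("1D_Advection_Sols_beta0.1.hdf5", "r") as f:
    data = np.array(f["tensor"])

# One trajectory: (n_timesteps, n_space, ...).
target = data[0]

# Toy "model": a persistence baseline that repeats the initial condition.
pred = np.repeat(target[:1], target.shape[0], axis=0)

print(f"nRMSE of persistence baseline: {normalized_rmse(pred, target):.4f}")
```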
